EXPERIMENTAL EVALUATION OF THE PERFORMANCE OF A DOMESTIC WATER HEATING SYSTEM UNDER BAGHDAD CLIMATE CONDITIONS

ABSTRACT The aim of the current study is to evaluate the performance of a domestic water heating system for residential areas under Baghdad climatic conditions, with a view to substituting electric water heaters with solar-powered water heaters using solar collectors. Many countries, such as Iraq, struggle with electric power issues while receiving very high solar insolation. Solar energy is a clean, non-depleting and low-cost source that can be exploited especially in residential areas, which account for a large percentage of energy consumption, by replacing electric water heating with solar water heating to reduce electricity usage. Therefore, six flat plate solar collectors, each with an absorbing area of 1.92 × 0.85 m and one 4 mm thick glass cover, are utilized for experimental investigation under Baghdad climatic conditions. The collectors were tested under steady-state settings, which assumed that sunlight intensity, ambient temperature, and the inlet-outdoor temperature difference in each collector in the system were constant throughout the operation.

Introduction

Significant advances in living standards have resulted in increased usage of energy sources such as domestic hot water (DHW), air conditioning, and electricity. There is an imperative necessity to accelerate the evolution and deployment of clean energy technologies to deal with global issues such as energy supplies, environmental degradation, and long-term development [1,2]. In developed countries, buildings need 30-40 % of annual primary energy, and around 15-25 % in developing countries [3][4][5]. However, it should be noted that the energy needed to prepare domestic hot water has a large share in the yearly energy consumption of buildings [6,7]. In that consideration, there is a comprehensive tightening and restriction of national building standards. As a result, the space heating requirements of newly constructed and renovated buildings are reduced dramatically, which tends towards nearly Zero Energy Construction standards [8]. Whereas energy consumption for building space heating has declined significantly over the previous decades, the energy requirements to produce DHW have stayed essentially constant. As a result, the contribution of energy for DHW in the overall energy balance of buildings is growing more and more dominant [9]. Globally, lowering energy consumption in buildings helps to reduce greenhouse gas emissions, fossil fuel extraction and the environmental costs of transporting fuels [10]. There is an important requirement for extra energy storage, which could be accomplished in a variety of ways including thermal storage, electric batteries, flywheels, dams, etc. [11,12]. A significant portion of this thermal energy is utilized for space heating [13]. Water heating is one of the most important uses of energy in homes, and domestic electric water heating systems (DEWH) provide a considerable heat reserve for shifting electric load throughout the day [14]. Replacing electric water heating with solar water heating can reduce electricity usage [15]. The use of solar water heaters is suitable for many countries struggling with electric power issues, such as Iraq. Due to its geographical position and suitable weather conditions, most of Iraq receives very high solar insolation [16]. As a result, it is important to turn to solar energy options in order to decrease the usage of fossil fuels [17].
Therefore, studies on domestic water heating systems (DHWS) are of scientific relevance, being associated with reducing electric power consumption through the use of solar energy, especially for high-insolation countries such as Iraq.

Literature review and problem statement

The global market potential, thermal engineering and economic viability of solar water heaters (SWH) are discussed in [18]. Globally, there are opportunities for further adoption of SWH to supply hot water in the residential and commercial sectors [19]. In many countries, realizing these opportunities requires improved economic viability. This entails a combination of lower installed cost, improved system efficiency, durability and ease of maintenance [20]. The building sector is responsible for about 40 % of the overall final energy consumption, mostly due to space heating and domestic hot water (DHW) heating. Electric water heaters can be substituted by SWHS to support space heating and domestic hot water (DHW) heating [21]. In this case, a solar water heating system (SWHS), as an application of solar thermal technology, provides some of the heat energy requirements for domestic hot water (DHW) and space heating, supported conventionally by electricity, natural gas, or even other fossil fuels. Therefore, in the Middle East region and during winter days, the increased demand for electrical energy can be reduced from an economic point of view by using the SWHS [22]. The SWHS is a common application of solar energy in which the received radiation is converted into heat and then transferred to a circulated medium, mostly water or air [23]. By this means, electric water heaters are substituted by SWHS to support space heating and domestic hot water (DHW) heating [24]. According to the aforementioned advantages and the extensive developments in solar water heater design within the last 15 years, the global solar water heating market has grown drastically [22,25]. Iraq's climate is characterized by a moderate winter with a minimum temperature of 5 °C and a hot, dry summer with a maximum temperature of 45 °C. The daily sunshine period in Iraq varies from 7 to 12 h in winter and summer, respectively. The average daily solar irradiance is 6.5-7 kWh/m² and the total annual sunshine period is at least 2,800 h [26]. The lack of power supply in Iraq, the abundant sunshine hours throughout the year, and the reasonable cost of the system are the motivations to install the SWHS to meet domestic and industrial hot water demands [27]. Accordingly, various techno-economic analyses of SWHS have been conducted: [28] used domestic hot water to reduce electric power consumption, while [29] studied a solar water heater to meet domestic requirements for industrial areas in Iran [30]. Moreover, the solar water heater proved to be an efficient system for producing hot water in the winter season [31]. Due to the geographical position and suitable weather conditions, most sections of Iraq have very high solar insolation [16]. As a result, it is important to turn to solar energy options in order to decrease the usage of fossil fuels [17].
As can be drawn from previous studies, high-insolation countries such as Iraq still depend mainly on electric energy to heat water, even in residential areas. The use of a solar-powered water heating system needs an actual and practical assessment to show that electricity can be dispensed with by using solar energy as a substitute for heating the water used in residential areas; thus, solar energy is a successful alternative to fossil fuels, which greatly reduces global warming and environmental pollution. Therefore, in crowded cities such as Baghdad, this system needs an actual and practical assessment to figure out its importance in reducing the consumption of electric energy.

The aim and objectives of the study

The aim of the study is an experimental evaluation of the performance of a domestic water heating system under Baghdad climate conditions. This will make it possible to substitute electric water heaters with solar-powered water heaters using solar collectors. To achieve this aim, the following objectives are accomplished:
- to identify the climatic parameters during the study period;
- to evaluate the performance of a domestic water heating system.

1. Fabrication of a domestic water heating system and study hypothesis

Six flat plate solar collectors are used in this work, as shown in Fig. 1, a, b. Each collector has an absorbing area of 1.92 × 0.85 m with one 4 mm thick glass cover. The absorber of each collector consists of ten equally spaced parallel aluminum risers of 10 mm outer diameter and 1.92 m length. These risers connect the lower header with the upper header, both made from aluminum tubes of 18.75 mm outer diameter and 0.85 m length; the joints between the headers and the riser ends are welded using aluminum alloy. The solar collector is insulated from the back and sides using glass wool insulation of 50 mm thickness. Two vertical cylindrical aluminum tanks with a wall thickness of 2 mm were used. The internal tank has an inner diameter of 500 mm, a length of 1 m, and a capacity of 170 liters, while the outer tank has an inner diameter of 600 mm and a length of 1.1 m. The internal tank is employed as the open-loop (direct) water storage tank, whereas glass wool is used to fill the annular gap between the internal and external tanks. Each tank has four holes for inlet and outlet points as well as a ventilation hole. A glass wool insulator of 50 mm thickness is used to insulate the tank's external wall, and the storage tank can be positioned vertically as needed. The storage tanks are presented in detail in Fig. 2. Plastic pipes with a nominal diameter of 12.5 mm are utilized as linking pipes between system components. The storage tank, flow meter, water circulation pump, and solar collectors are linked with each other as a closed loop. Fittings such as bends, elbows, and valves are employed as connecting parts. In each system, a small MARQUISE MKP60-1 type water circulation pump is utilized to circulate water in the closed loop. The pump consumes 370 W of electricity, has a maximum head of 40 meters, and a maximum flow rate of 40 liters per minute. Moreover, valves are used to control the flow rate in the closed loop.

Fig. 2. First thermal storage tank in a vertical orientation

The temperature is measured utilizing type K thermocouples (copper-constantan) at specific points in the heating system. A solar meter is utilized to measure the solar radiation that falls on the collector's surface.
A ZYIA-type flow meter with a range of 1 to 7 L/min was used to measure the mass flow rate of the forced-circulation flat plate collectors. All instruments were calibrated at the Central Organization for Standards and Quality Control (COSQC). The hypotheses of the study can be summarized as follows:
1. There is no effect of accumulated dust on the system, despite the fact that Baghdad city faces a high rate of dust accumulation.
2. The collector was tested under steady-state settings, which assumed that sunlight intensity, ambient temperature, and the inlet-outdoor temperature difference in each collector in the system were constant throughout the operation.
3. The interior tank is employed as the open-loop (direct) water storage tank, whereas glass wool is used to fill the annular gap between the internal and exterior tanks.
4. No pressure is lost during water circulation.
5. Solar intensity completely reaches the pipes and no radiation is lost between these pipes.

2. Test procedure

The solar water flat plate collectors with forced circulation were linked in a closed loop. The tests were conducted between October 2020 and February 2021, with most of them taking place on sunny days. The collector was tested under steady-state settings, which assumed that sunlight intensity, ambient temperature, and the inlet-outdoor temperature difference in each collector in the system were constant throughout the operation. On a clear day, this period was generally 15 minutes:
1. This sort of test is used to assess the instantaneous efficiency of a solar collector system.
2. The heater is fixed in the second tank to heat the water before it is circulated to the system and to investigate the influence of inlet temperature on system efficiency, as illustrated in Fig. 3.
3. The following preparations were conducted prior to each test: the closed collector loop was filled with water, the collector's glass cover was cleaned thoroughly, and the measuring instruments and apparatus were checked as described in the previous section.
4. The storage tanks were filled with water, and the pump was turned on for one hour prior to the start of the readings.
5. The experimental data were recorded at variable inlet temperatures in the system.
6. Three different load water withdrawal profiles were used to test the system: a continuous load of 60 L/h, a continuous load of 80 L/h, and a variable daily household load, which is equal to the daily usage of a single storage tank volume.
7. The load profile was recorded experimentally in winter and summer in the home of a family of 17 occupants; the experiments were recorded during the day.
8. In all the above cases, in each test and every 15 minutes, all the measurements of ambient temperature, solar radiation, and temperatures at each point in the system were recorded.
The solar flat plate collector is considered an essential part of the forced-circulation solar water heating system because it has two advantages simultaneously: it captures solar energy directly, and it converts this solar energy into heat and transports it to the storage tank.

3. Load profile

Regarding the thermal load, although hot water consumption varies greatly from day to day and from user to user, the water usage during the summer is slightly greater than during the winter. However, during this time the temperature requirement for hot water is considered very low in comparison to winter. For the present work, three types of water withdrawal patterns are adopted, namely:
1. Continuous water withdrawal with a flow rate of 60 L/h.
2. Continuous water withdrawal with a flow rate of 80 L/h.
3. Daily water consumption taken for a family of 17 occupants for the summer and winter seasons.

1. Evaluation of climatic parameters during the study period

Global solar radiation, ambient temperature and wind speed were measured, as shown in Fig. 4-7, on 1-7-2020, 15-6-2020, 15-7-2020 and 5-8-2020, respectively. The test of PV panels was conducted. The global solar radiation was taken at the horizontal surface and calculated for the tilted panel. Fig. 4 shows that the maximum solar radiation occurred at solar noon (985 W/m²) and decays after that. The ambient temperature rises from 36.6 °C at 9:00 AM to 44.1 °C at 1:30 PM. Fig. 8 shows the maximum daily solar radiation (17,043 W/m²) at a 15° tilt angle due to the maximum power generated at this angle. Fig. 9 shows oscillating values of wind speed, with 1.111 m/s as the minimum and 6.944 m/s as the maximum value. Another group of solar radiation and ambient temperature measurements was taken from 9-11-2020 to 20-1-2021, for testing the series-connected flat plate collectors. A sample of these data measured on 9-11-2020 is shown in Fig. 10. The maximum solar radiation was 797 W/m² at 11:45 AM. The ambient temperature rises from 23.3 °C to 29 °C during the measured period. All climate parameters change arbitrarily and depend mainly on the time of day. Due to air pollution in the city of Baghdad, the values of these variables cannot be easily fixed and adopted as standard climate data, but can be obtained through practical measurements. Fig. 11-14 show the measured temperature at the inlet and outlet of each of the six flat plate collectors linked in series for the selected days, namely 9-11-2020, 14-12-2020, 5-1-2021 and 19-1-2021, when the system was under an 80 L/h load profile (on 9-11-2020 and 5-1-2021), a variable load on 14-12-2020 and 60 L/h on 19-1-2021. It is observed that each collector is responsible for raising the water temperature partly. Table 1 shows the maximum and minimum temperature rises for the six collectors during the day test. It is obvious that the effect of the sixth collector on water heating is smaller than that of the first. The total temperature rise for the selected days is shown in Table 2 for the supply load of 80 L/h.

Table 1. Experimental results of maximum and minimum temperature for the solar collectors array on 9-11-2020

During the winter season (January-April and October-December), the best performance of the solar water heating system with good insulation of the storage tank was observed. The flat plate collector's outlet temperature rises during the morning to reach 60 °C and then begins decreasing until sunset. The outlet temperature of the solar thermal collector follows the behavior of the solar radiation. A decrease in the top layer's temperature during the hot water consumption periods can also be noted, and this temperature is approximately constant. Fig. 15, 16 display the influence of the inlet temperature on the solar collector heat efficiency for an 80 L/h water load. The heat gain of the collector array increases with decreasing inlet water temperature to the collectors. The maximum calculated heat gain was Qu = 4,192 W for Tin = 30.8 °C at 12:00 noon on 9-11-2020, as shown in Fig. 15, while the maximum value calculated for Tin = 40 °C was Qu = 3,588.7 W at 11:15 AM on 5-1-2021, as shown in Fig. 16.
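To make the heat gain and efficiency figures quoted above easier to reproduce, the sketch below shows how the useful heat gain and instantaneous thermal efficiency of the collector array can be estimated from the measured quantities. It is a minimal illustration based on the standard energy-balance relations Qu = ṁ·cp·(Tout − Tin) and ηt = Qu/(G·Ac); the variable names and the sample numbers are illustrative and are not values taken from the study's data tables.

```python
# Minimal sketch: useful heat gain and instantaneous efficiency of a
# flat plate collector array from measured temperatures and irradiance.
# Standard energy-balance relations; sample values are illustrative only.

CP_WATER = 4186.0   # specific heat of water, J/(kg.K)
RHO_WATER = 1000.0  # density of water, kg/m^3 (approximate)

def useful_heat_gain(flow_l_per_h: float, t_in_c: float, t_out_c: float) -> float:
    """Qu = m_dot * cp * (T_out - T_in), in watts."""
    m_dot = flow_l_per_h / 1000.0 * RHO_WATER / 3600.0  # kg/s
    return m_dot * CP_WATER * (t_out_c - t_in_c)

def thermal_efficiency(q_u_w: float, irradiance_w_m2: float, area_m2: float) -> float:
    """Instantaneous efficiency = Qu / (G * A_collector)."""
    return q_u_w / (irradiance_w_m2 * area_m2)

if __name__ == "__main__":
    # Six collectors of 1.92 m x 0.85 m absorbing area, as described above.
    area_total = 6 * 1.92 * 0.85
    # Illustrative operating point (not measured data from the study):
    q_u = useful_heat_gain(flow_l_per_h=112.0, t_in_c=30.8, t_out_c=62.0)
    eta = thermal_efficiency(q_u, irradiance_w_m2=800.0, area_m2=area_total)
    print(f"Qu = {q_u:.0f} W, efficiency = {eta:.1%}")
```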
This is due to a decrease in the heat absorbed as the inlet water temperature increases, for the same water flow rate through the collectors (112 L/h). Fig. 15, 16 also present the thermal efficiency of the collector array, which shows the same behavior as the useful heat gain; higher values are obtained for Tin = 30.8 °C (54.2 % at 12:00 noon) than for Tin = 40 °C (ηt = 48.9 % at 12:00 noon). Fig. 17, 18 show the effect of the load profile on the thermal performance, useful heat gain, and thermal efficiency of the array of solar collectors at the same inlet temperature. For an 80 L/h water load, the heat gain of the collector array increases with decreasing water inlet temperature to the collector. The maximum calculated heat gain was Qu = 4,192 W for 80 L/h at 12:00 noon on 9-11-2020, as shown in Fig. 17, while the maximum value calculated for the variable load was Qu = 2,868.7 W at 11:15 AM on 25-11-2020. This is because the constant load of 80 L/h was more suitable than the variable load. Fig. 18 shows that the thermal efficiency of the collector array has the same behavior as the useful heat gain.

2.3. Effect of the load profile

The thermal efficiency of the collector array has the same behavior as the useful heat gain, as shown in Fig. 18; higher values were obtained (54.2 % at 12:00 noon) for 80 L/h, while the value obtained for the variable load was ηt = 36.3 % at 11:15 AM. Fig. 19 shows a comparison between the results of two days conducted at the same variable load. The maximum calculated heat gain was Qu = 4,234.4 W at 12:30 PM on 14-12-2020, while the maximum value calculated for the other variable load at the same inlet temperature of 23 °C was Qu = 4,672.6 W at 11:45 AM on 20-12-2020. This is due to the effect of the load profile on the heat gain values. For the same days, as demonstrated in Fig. 20, the highest heat efficiency of the collector array was ηt = 51.1 % at 10:30 AM on 14-12-2020, while the value obtained for the same variable load was ηt = 61.4 % at 12:45 PM on 20-12-2020. The thermal efficiency of the domestic hot water system increases during the day as a result of the absorbed solar energy. It should also be noted that the daily consumption of hot water affects the solar system's thermal performance. Fig. 21 depicts the impact of preheating on the heat efficiency of solar collectors for the variable load.

2.4. Thermal performance with and without preheating

The impact of preheating on the heat efficiency of solar collectors for the variable load is shown in Fig. 21. The maximum calculated heat gain was Qu = 2,439.9 W when using a heater, at 13:00 on 16-12-2020, while the maximum value calculated without using a heater was Qu = 2,614.7 W at 13:15 on 29-12-2020. This is due to a decrease in the heat absorbed as the inlet water temperature increases when using the heater in the solar collector array, for the same water flow rate of 112 L/h. The main purpose of using an auxiliary heater in the solar collector array is to raise the temperature in the collector on days of no or weak solar radiation. The second purpose is to represent the performance of the solar collector array in the summer season, when the inlet temperature is high. Fig. 22 shows the influence of the heater setting on the thermal efficiency of the array of solar collectors for an 80 L/h water load. The heat gain of the collector array increases with decreasing water inlet temperature to the collectors. The setting of the heater temperature must be suitable and not too high.
The maximum calculated heat gain was Qu = 1980.1 W for the 40 °C heater setting at 12:00 AM on 5-1-2021 as shown in Fig. 22, while that calculated for the 50 °C heater setting was Qu = 985 W at 13:15 PM on 14-1-2021 as a maximum value. This is due to a decrease in heat gain due to increasing inlet water temperature for the same water flow rate through the collectors of 112 L/h. Fig. 23 shows the same behavior of both thermal efficiency of the collectors array and useful heat gain, higher values are obtained for the 40 °C heater setting (32.0 % at 12:00 AM) while that obtained for the 50 °C heater setting was η t = 14.7 % at 13:00 PM. The thermal efficiency of the collector depends on solar radiation during the testing period inspite of the inlet collector temperature. Discussion of the evaluation of the domestic water heating system All climate parameters change arbitrarily and depend mainly on daily time. The values of these variables cannot be easily fixed and adopted as climate data, but can be read through practical experiments due to climate pollution in the city of Baghdad (Fig. 4-10). The thermal efficiency of the collectors array shows the same behavior as the useful heat gain, higher values are obtained in the summer season. However, the measured temperature at the inlet and outlet of each of the 6 series-connected flat plate collectors for the selected days. When the system is under an 80 L/h load profile, variable and 60L/h (Tables 1, 2, Fig. 11-14). It is observed that each collector is responsible for raising the water temperature partly, while the effect of the sixth collector on water heating is lesser than that of the first. The heat gain of the collector array increases with decreasing inlet water temperature to collectors. This is due to a decrease in heat absorbed due to increasing inlet water temperature for the same water flow rate through the collectors. Therefore, the thermal efficiency of the collectors array shows the same behavior as the useful heat gain due to the effect of the load profile on the heat gain values. However, the effect of preheating on the thermal performance of solar collectors for the variable load is due to a decrease in heat absorbed due to increasing inlet water temperature when using the heater in the solar collector array for the same water flow rate additionally. The main purpose of using an auxiliary heater in the solar collector array is to raise the temperature in the collector on days of no or weak solar radiation. The second target for using it in the solar array is to present the performance of the solar collector array in the summer season when the inlet temperature is high. The thermal efficiency of the collectors array has the same behavior as the useful heat gain with a great impact on the magnitude of load reduction, which was higher in the evening than in the morning, as a sufficient amount of hot water was provided by SWHs during the afternoon, depending on the pattern of household use of hot water. While, the average consumption of hot water is 40-50 L per person per day (Fig. 17, 18). Therefore, the solar water heating system is designed to supply a single-family detached house with 200 L of hot water at 50 °C per day (Fig. 19, 20). Many variables have an effect on the hourly distribution of DHWS consumption for a day and mainly depend on heated water consumption. This typically varies from day to day, season to season, and family to family. 
The total heat losses of the solar water heating system include the loss of heat to the indoor room and the loss of heat to the atmosphere, excluding the collector's losses. Hence, this system can heat water in residential areas, buildings, and elsewhere, which represents a great energy backup and thus saves money over a long period. The energy efficiency concept for SWHs provides sufficient information to establish a detailed study. The current study takes into consideration water heating for personal usage in residential areas according to the number of family members, while space heating is not included, especially in the winter season. On the other hand, this system is easy to modify for water desalination in the summer season in Baghdad city, which suffers from high dust accumulation rates and frequent dust storms. The system performance can also be enhanced by an optimum selection of the water tank capacity and the number of solar collectors, together with an increase of the tank's isothermal layers. It is also recommended that further research should focus on the integration of these systems in other sectors (on a large scale) by improving their energy performance and taking into account the technical and economic situation of each country. Finally, several benefits of the DHW system can be cited: the exploitation of the region's unique renewable energy potential, the development of a sustainable economy, and ultimately the protection and conservation of the environment through the use of solar water heating.

Conclusions

1. All climate parameters change arbitrarily and depend mainly on the time of day. Due to air pollution in the city of Baghdad, the values of these variables cannot be easily fixed and adopted as standard climate data, but can be obtained through practical measurements. Despite that, solar energy has a significant effect on the DWHS under Baghdad climate conditions.
2. The thermal efficiency of the collector array shows the same behavior as the useful heat gain; higher values are obtained in the summer season with a high surface temperature, while in the winter season the efficiency follows the solar radiation and the collector tilt angle. The thermal efficiency of the collector array has the same behavior as the useful heat gain, with a great impact on the magnitude of load reduction, which was higher in the evening than in the morning, as a sufficient amount of hot water was provided by the SWHs during the afternoon, depending on the pattern of household use of hot water. The average consumption of hot water is 40-50 L per person per day. Many variables affect the hourly distribution of DHWS consumption for a day and mainly depend on heated water consumption, which typically varies from day to day, season to season, and family to family. The total heat losses of the solar water heating system include the loss of heat to the indoor room and the loss of heat to the atmosphere, excluding the collector's losses. The effect of accumulated dust leads to a decrease in the collector efficiency, since Baghdad's weather is classified as a dusty climate. Meanwhile, the proposed model is required for higher-occupancy areas in order to save electric power.
Conflict of interest

The authors declare that they have no conflict of interest in relation to this research, whether financial, personal, authorship or otherwise, that could affect the research and its results presented in this paper.

Financing

The study was performed without financial support.

Data availability

Data will be made available on reasonable request.
The role of small specimen creep testing within a life assessment framework for high temperature power plant

ABSTRACT The safe operation of ageing materials and structures operating at high temperature and pressure is challenging. This paper, for the first time, clearly addresses the role of small specimen creep testing methods within a practical life assessment framework, with a case study used to illustrate the main principles and applications. To enhance the current practice for assessing the condition of creep-ageing components, more proactive use of small specimen testing methods is proposed for the in-service condition assessment of power plant materials, earlier in the plant lifecycle and within a holistic life assessment framework. The paper describes how small specimen creep testing methods and other complementary tools can be used together in a new structured approach to life management. There is a clear need to provide the plant owner with more reliable and effective life prediction tools, based on earlier and more rigorous assessment of in-service life consumption. A novel, holistic life assessment methodology including the use of small specimen creep testing has been developed by the authors.

Introduction

Power stations are designed and maintained to operate for long periods safely and reliably in accordance with statutory requirements. In the UK, these statutory provisions are detailed in the Pressure Systems Safety Regulations (PSSR) [1], which are applicable to both conventional and civil nuclear generating plant. Further to the PSSR, UK civil nuclear advanced gas cooled reactor (AGR) generating plant is subject to licence conditions stipulated by the Office for Nuclear Regulation [2]. The aim of these regulations is to prevent injury from the hazard posed by high stored energy systems in the event of a failure. Subsequent pressure system inspections must be undertaken by competent persons and in accordance with written schemes of examination that identify the systems or components requiring inspection. These written schemes take into account the relevant ageing mechanisms and asset condition, including past inspection and assessment history and intended future operation. The inspection periodicity between major statutory outages on UK conventional fossil-fired generating plant is every 4 years, with a short interim outage scheduled after 2 years. Typically, AGR generating plant is subject to a periodic pressure systems outage inspection every 3 years. Understanding residual life (and critically the mode of failure and rate of deterioration) is essential to maintain safety and to enable the utility to plan for any repair or replacement activities in a timely and cost-effective manner. The operational duty of the high temperature and pressure steam circuits differs considerably between an AGR plant and a conventional fossil-fired plant. Large conventional fossil-fired plant are operated far more flexibly and at higher temperatures than AGR plant (design temperature of 568°C for conventional fossil-fired plant vs. 540°C for AGR plant), but with similar maximum pressures of ∼170 bar; this reflects process conditions as the steam exits the boiler. There are other high temperature components aside from pressure systems in operation that could equally benefit from the use of small specimen creep testing, such as high pressure (170 bar) and intermediate pressure (40 bar) steam turbine rotors.
The potential use of small specimen creep testing on these components is, however, considered more difficult due to limited access, since periodic overhauls typically occur every 12 years and with the turbines removed from berth. On these rotors, access to the most relevant locations for material sampling is more restricted; for example, the turbine disc steeples are of great interest, however these locations are precision machined and a sample could not be feasibly extracted without undertaking a costly repair to the disc head. Hence, the use of small specimen creep testing of material extracted from these components is more likely to be a practical choice on a rotor that has been retired from service, where the information obtained from the examination could be used to provide a more informed view on the condition of other in-service rotors. The review published by Hyde et al. [3] in 2007 focussed on the use of small specimen testing methods to satisfy data requirements. The present paper is applicable to both conventional and nuclear plant applications and also provides a review of the current state of the art of small specimen creep testing, but with a focus on plant application. Subsequently, the work emphasises the following:
- techniques currently used to assess the residual creep life of in-service plant;
- a proposal that describes how the more proactive use of small specimen testing methods could improve the approach to creep life expiry and support subsequent run-repair-replace decisions.
Examples of in-service inspection data and a detailed case study based on high temperature and pressure pipework in conventional fossil-fired plant operation are used to illustrate the above points. Pipework systems are chosen for the case study because of the extensive amount of through-life inspection data available, as well as the extensive research studies related to material creep behaviour and modes of failure. Understanding the contribution of prior service duty to creep life consumption, and predicting the future rate of ageing, is essential if a utility is to optimise its generating output and return on investment, with respect to both maintenance and capital infrastructure costs. Figure 1 shows historical operating data from two 2000 MW conventional fossil-fired power stations, presented as unit operating hours divided by unit starts; the data presented for each station has been averaged across all of the units on each site. This shows a dramatic variation in the operating hours-to-starts ratio, from >500 during the early operational years, which reflects significant operation at constant (base) load, to around 25 hours-to-starts following a dramatic change in the mid-1990s. This significant reduction in the hours-to-starts ratio indicates a move to more flexible plant operation, which is driven by the demands of the commercial market. Figure 2 shows specific unit average operating hours and starts for station 'A'. If all eight 500 MW units operating in stations 'A' and 'B' are considered, the range of unit operating hours is from 250 to 280 khr and the range of unit operating starts is from 2500 to 4100. To put this operational data into context, particular reference is made to a UK parliamentary study on UK energy policy [4], which describes the significant impact of the privatisation of the Central Electricity Generating Board (CEGB) in 1990 on these figures. Both current and future operation are greatly influenced by legislation regarding environmental compliance.
As a consequence of this legislation, it is anticipated that stations 'A' and 'B' will, from 2016 onwards, revert to limited annual operational hours and unit starts (up to ∼2000 h and 100-200 starts). This illustrates the challenges associated with optimal deployment of small specimen testing methods (or any other monitoring or life assessment approach); the methods have to account for a wide range of operating duty throughout the station's operational life. In the UK, the approach to managing the integrity of the high pressure and temperature pipework systems and components on conventional fossil-fired plant is based primarily on an inspection-based assessment (IBA) approach. This IBA approach is practically implemented in line with the statutory outage inspection periodicity and consequently it has no specific requirement to provide longer-term predictions of component degradation rates. For the utility, the goal of optimised life management would be realised with the following:
- timely replacement of life-expired components (maximising the useful service life), allowing the utility to budget and plan for cost-effective replacement;
- continual optimisation of inspection plans during unit shutdowns, making the best use of all available condition monitoring and outage overhaul data;
- more timely use of on-load monitoring data and outage inspection findings to update and manage the plant risk profile within acceptable levels;
- avoidance of significant unplanned and costly outages.
In this context, the development and more proactive use of small specimen creep testing methods for the assessment of the integrity of high temperature and pressure systems provides a significant opportunity to reach this goal, but in conjunction with other complementary assessment tools that are outlined in this paper.

Specialised small specimen creep testing

In the last 20 years, small specimen creep testing techniques (SSTT) have been increasingly developed to evaluate creep properties for materials relevant to components operating at high temperature in power plant applications, to assess their remaining life and to avoid premature failures [5]. A way forward to establish this key material data is considered to be the use of standard-size uniaxial creep tests, which are well-standardised mechanical test techniques, but which require a large amount of material to be sampled from the operating components. Furthermore, it is often necessary to perform analyses on critical locations of operating components, e.g. the heat-affected zone of welds, or to risk-rank the most vulnerable operating components within a larger population. In order to avoid these particular difficulties, innovative non-invasive testing techniques, such as the small punch, impression, small two-bar and small ring creep tests, have been investigated by several authors, because of the relatively small volume of material they require. In practice, these techniques have been used to provide a ranking of material creep strength, thereby assigning a suitable inspection priority that has been adopted during a plant outage [6]. Furthermore, small punch creep testing has found applications in the nuclear field, especially for the characterisation of irradiated materials (see the section 'Applications for irradiated materials').

Scoop sampling

In order to manufacture miniature specimens from in-service components, scoop sampling techniques are currently used in industrial procedures [7,8].
Rolls-Royce produced the SSam™-2 sampler for the extraction of material samples, shown in Figure 3; the machine has a length of 420 mm and uses a hemispherical shell cutter of 50 mm in diameter, which can remove a 'button'-shaped sample of material that is typically 25 mm in diameter and 4 mm thick [8]. Rolls-Royce has also designed two other smaller samplers, one able to fit within a 38 mm diameter tube bore and the second within a 30 mm annulus [8]. Scoop sampling techniques are designed such that the wall thickness of the component does not become smaller than the design wall thickness after the material has been removed [9]. Nevertheless, an increase of the creep strain rate in the sampled area has been observed in finite element (FE) analyses of drums, steam pipes and turbine rotor bores. This is due to the increase in stress at the bottom of the excavation, which is up to 1.2-1.5 times greater than the stress in the intact wall at the same depth from the external surface of the component [10,11]. In order to assess the creep damage due to this stress increment caused by the scoop sampling, Klevtsov and Dedov [11] carried out FE analyses of a steam pipe with an outer diameter, D_0, of 325 mm and a wall thickness, t_w, of 30 mm, and creep cavity counts on three steam pipe bends made of 12Ch1MF and 15Ch1M1F steels. Two important conclusions can be deduced from their research: firstly, after 23-27 khr of service, the analysed components presented the same creep damage in the region under the scooped area and at a distance of 10 mm from it. Secondly, the tensile properties of the materials remained the same during service and were independent of the excavation location and of the specimen typology [11]. Brett carried out 170 impression creep tests (ICTs) on material scoop-sampled using the SSam™-2 from CrMoV main steam and hot reheat pipes, which had wall thicknesses of 60-65 mm and 30-35 mm, respectively [12][13][14][15]. He found that the scoop technique can be considered to be 'quasi-non-destructive', since it does not require a repair stage as long as the maximum excavation depth does not exceed 10% of the wall thickness of the main steam pipe. For the hot reheat pipe, a depth larger than 10% of the wall thickness is acceptable only over a restricted area in the centre of the excavation [15]. Rouse et al. [16] investigated the effects of scoop sampling on the creep response of straight pipe sections with different geometries, for a range of cut depths from 1 to 5 mm and for different load conditions, for the material 0.5Cr0.5Mo0.25V at 640°C. They found that the presence of the excavation generates a localised stress concentration around the base of the excavated notch. In the case of a static application of the load to the pipe, the condition is considered safe if the stress at the base of the excavation does not exceed the magnitude of the stress at the inner surface. This condition is achieved only when the depth of the excavation, h′, is 1 mm for a pipe section with D_0 = 360 mm and t_w = 60 mm. Rouse et al. [16] also propose a parametric equation which can estimate the scoop sample stress riser effect for a wide range of materials and pipe geometries, given in their Equation (1), where A, B, C, … , I are fitting constants. Future study is needed on the effects of scoop sampling on the sampled component under cyclic load conditions.
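The practical screening rules summarised above (Brett's ∼10% of wall thickness criterion for main steam pipe, and the requirement that the stress at the base of the excavation should not exceed the inner-surface stress) lend themselves to a simple pre-sampling check. The sketch below is a minimal illustration of such a screening step, assuming the 10% rule as the acceptance criterion; the threshold and function names are illustrative and are not taken from a published procedure.

```python
# Minimal sketch of a pre-sampling screening check for a scoop excavation,
# assuming Brett's rule of thumb (excavation depth <= ~10% of wall thickness
# for main steam pipe). Threshold and names are illustrative assumptions.

def scoop_depth_acceptable(excavation_depth_mm: float,
                           wall_thickness_mm: float,
                           max_fraction: float = 0.10) -> bool:
    """Return True if the proposed excavation depth satisfies the depth rule."""
    return excavation_depth_mm <= max_fraction * wall_thickness_mm

if __name__ == "__main__":
    # Example: 4 mm deep scoop sample from a 60 mm thick main steam pipe.
    depth, wall = 4.0, 60.0
    ok = scoop_depth_acceptable(depth, wall)
    print(f"Proposed scoop depth {depth} mm on {wall} mm wall: "
          f"{'acceptable' if ok else 'requires detailed assessment or repair'}")
```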
These studies on the effect of the scoop excavation suggest that a degree of prior assessment is required before a sampling exercise is undertaken, considering the excavation depth, component geometry, material type, material condition and the operational duty of the plant. The study by Rouse et al. [16] considers operating temperatures of 640°C, which is much higher than the typical plant operating temperatures (∼570°C) to which Brett's study refers. This emphasises the need to measure on plant, or estimate by analysis, the operating temperature at a proposed sampling location. For practical purposes, a scoop sample location would be identified as a location on a future inspection schedule, noting that the additional cost and effort for the inspection of scoop sample locations is typically insignificant when compared to the full inspection scope during an outage. Ideally, all scoop sample locations would be accessible for repair should the need be foreseen.

Figure 3. Photographs of (a) scoop sampling in process on pipework, and (b) a typical scoop sample, from Hyde et al. [112].

Reference stress method

A major concern in the evaluation of test data from small specimen creep testing techniques is their correlation with uniaxial creep data. The approach currently used for data conversion is the reference stress method [17,18]. For materials obeying the Norton creep law, reported here in Equation (2), where A′ and n are material constants, the reference stress method involves calculating two reference parameters, η and β, such that a relationship between the equivalent uniaxial stress, σ_ref, and the applied stress, σ, and a relationship between the steady-state creep strain rate, ε̇^c_ss, and the creep displacement rate obtained from the SSTT, Δ̇^c_ss, are established. Δ̇^c_ss can be expressed as a function of the creep material properties, the dimensions of the specimen and the nominal stress, σ_nom, as reported in Equation (3). The reference parameter η is defined as a material-independent, non-dimensional constant such that the ratio f_1(n)/η^n is also constant with n. Thus, the equivalent gauge length (EGL) of the structure can be defined as in Equation (4). It should be noted that the EGL does not vary with n, since none of the terms in Equation (4) does. If s is a characteristic dimension of the sample, for example a length, the reference parameter β can be expressed as in Equation (5), and again it should be noted that this constant is independent of n.

Codes and standards for small specimen creep testing techniques

Agreed standards for small specimen creep testing techniques still do not exist, but two codes of practice have been released for the small punch creep test (SPCT), by the European Committee for Standardization in 2006 [5] and by the Standardization Administration of China in 2012 [19,20]. Efforts to develop a standard draft for the SPCT are also ongoing in Japan [21]. In the USA, an ASTM standard test method for small punch testing of ultra-high molecular weight polyethylene used in surgical implants has been released, but it does not concern creep testing [22]. The European CEN CWA 15627 [5], last updated in 2007, has been accepted as a first iteration for industrial applications throughout the world, even though it is far from being a recognised standard and certainly needs to be improved [23].
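Since the numbered equations referred to above are not reproduced in this text, the reference stress relations are summarised below in their commonly used general form; the exact definitions of f_1(n) and of the characteristic dimension s depend on the specimen type, so this should be read as a sketch of the method under those assumptions rather than as the authors' exact expressions.

```latex
% Norton creep law (Equation (2) in the text):
\[
\dot{\varepsilon}^{c} = A'\,\sigma^{\,n}
\]
% Reference stress conversion: the equivalent uniaxial stress is a fixed
% multiple of the nominal (applied) stress,
\[
\sigma_{\mathrm{ref}} = \eta\,\sigma_{\mathrm{nom}},
\]
% and the measured creep displacement rate is related to the equivalent
% uniaxial steady-state creep strain rate through the equivalent gauge length,
\[
\dot{\Delta}^{c}_{ss} = \dot{\varepsilon}^{c}_{ss}\!\left(\sigma_{\mathrm{ref}}\right)\,\mathrm{EGL},
\qquad
\mathrm{EGL} = \beta\, s
\quad\Longrightarrow\quad
\dot{\varepsilon}^{c}_{ss} = \frac{\dot{\Delta}^{c}_{ss}}{\beta\, s}.
\]
```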
The CEN code of practice consists of two parts. Part A is related to specifications for the test rig, test procedure and interpretation of results for the small punch test designed for material characterisation [5]. Part B is focused on the test rig, test procedure, test specimen preparation, interpretation of results and methods for deriving the yield strength, the ductile-to-brittle transition temperature (DBTT) and the fracture toughness [5,23]. Hurst and Matocha [23] published a critical review of the CEN CWA 15627, also suggesting some potential improvements to the code. One of their concerns relates to the shear punch test, which is not included in the code, even though it has been proven to be a reliable technique for the evaluation of tensile properties [23,24]. An open problem regarding small punch creep testing is the correlation between the load level applied to the small disc specimen and the stress induced in a conventional uniaxial creep test that exhibits the same time to rupture. Some equations for data interpretation have been proposed in the CEN CWA 15627 and are reported in the section 'Small punch creep test', but the procedure is still not totally accepted for industrial applications and further investigation of the complex behaviour of the specimen during testing is ongoing [25].

Currently used test methods

Reliable bulk behaviour can be obtained if at least eight grains are contained through the specimen thickness [26]. Among the small specimens described in this section, the smallest dimension is the small punch disc thickness, equal to 0.5 mm. The materials generally used in power plant have at least 20 grains through such a thickness, allowing consistent results to be obtained.

Impression creep test

In the ICT, a steady constant load, P, is applied to a flat-ended indenter in contact with the specimen at a fixed temperature. The test output is the variation of the indenter displacement with time, which is related to the creep properties of the small volume of material in the contact region between the specimen and the indenter. Figure 4(a) shows the typical test geometry. If the indenter is rectangular, the reference stress approach can be used to establish the corresponding equivalent uniaxial stress, σ_ref, and the steady-state creep strain rate, ε̇^c_ss, for materials obeying the Norton creep law [27]. Equations (8) and (9), where d is the indenter width, express the relationships between these quantities and the mean indenter pressure, p, and the creep displacement rate, Δ̇^c_ss. The reference stress parameters, η and β, do not depend on the impression depth, provided the creep displacement Δ is relatively small compared to the specimen thickness. Once Δ̇^c_ss is known, for example from FE analyses, for different values of n, it is possible to calculate β through Equation (10), by using assumed values for the stress multiplier, α [27]. The value of α for which β does not vary with n is the reference stress parameter η [27]. The η and β values for the recommended geometry (w × b × h = 10 × 10 × 2.5 mm) are η = 0.43 and β = 2.18, for an indenter width of d = 1 mm [27]. A typical set of deformation data obtained from such tests for a high temperature CMV (0.5%Cr0.5%Mo0.25%V) steel is shown in Figure 5(a). The slight fluctuations observed in the data in Figure 5(a) are mainly caused by temperature variations within the furnace and laboratory. However, it can be seen that these variations are typically well within ±1 μm [28].
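As a concrete illustration of the reference-stress conversion for the impression creep test, the sketch below maps a measured mean indenter pressure and steady-state displacement rate to an equivalent uniaxial stress and minimum creep strain rate, assuming the commonly quoted relations σ_ref = η·p and ε̇^c_ss = Δ̇^c_ss/(β·d) with the η = 0.43, β = 2.18 and d = 1 mm values given above; the numerical inputs in the example are illustrative only.

```python
# Minimal sketch: converting impression creep test (ICT) output to equivalent
# uniaxial creep data via the reference stress parameters quoted above.
# Assumed relations: sigma_ref = eta * p  and  strain_rate = disp_rate / (beta * d).
# Example input values are illustrative, not measured data.

ETA = 0.43        # reference stress parameter for the recommended ICT geometry
BETA = 2.18       # reference length parameter for the recommended ICT geometry
D_INDENTER = 1.0  # indenter width, mm

def ict_to_uniaxial(mean_pressure_mpa: float, disp_rate_mm_per_h: float):
    """Return (equivalent uniaxial stress [MPa], minimum creep strain rate [1/h])."""
    sigma_ref = ETA * mean_pressure_mpa
    strain_rate = disp_rate_mm_per_h / (BETA * D_INDENTER)
    return sigma_ref, strain_rate

if __name__ == "__main__":
    # Illustrative ICT result: 210 MPa mean indenter pressure,
    # 2.0e-4 mm/h steady-state displacement rate.
    sigma, eps_dot = ict_to_uniaxial(210.0, 2.0e-4)
    print(f"sigma_ref = {sigma:.1f} MPa, min creep strain rate = {eps_dot:.2e} 1/h")
```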
A comparison of minimum creep strain rate data for CMV and 316 stainless steel is illustrated in Figure 5(b). The ICT data lie on the same straight line as the uniaxial creep test data, confirming the reliability of the ICT.

Figure 5. (a) Impression deformations with time at 90 MPa and 600°C obtained from ex-service CMV steam pipe samples, from Sun et al. [28], and (b) minimum creep strain rate (MSR) data for 316 stainless steel at 600°C and 2-1/4Cr1Mo weld metal at 640°C, obtained from uniaxial tests and ICTs, from Sun et al. [28].

Small punch creep test

The SPCT consists of the application of a constant load, P, to a hemispherical indenter in contact with a disc specimen clamped between an upper and lower die containing the receiving hole. Figure 4(b) shows a sketch of the typical specimen and punch, while a schematic cross section of the test rig is given in Figure 6(a). The European Committee for Standardization has released a code of practice which provides standard dimensions for both the test rig and the samples to be tested [5]. According to it, the disc diameter has to be between 3 and 10 mm, the specimen thickness, t_0, between 0.2 and 0.5 mm, and the punch radius, R_s, between 1 and 1.50 mm. Li and Sturm [29] found a third-order polynomial, based on Chakrabarty's membrane theory [30] and valid for R_s = 1.25 mm and a_p = 2 mm (receiving hole diameter), which correlates the strain, ε, at the contact boundary between the punch and the disc with the central displacement of the punch, Δ, as shown in Equation (11). An empirical relationship between the applied load and the membrane stress, σ_m, is also given in the CEN code of practice and is reported here as Equation (12) [5]. These relationships are only valid when bending deformation of the specimen is negligible and the deformation mode can thus be assumed to be governed by membrane stretching. This happens when large deformations are exhibited by the specimen during the test, that is, from an engineering estimation, when Δ > 0.8 mm [5]. However, according to many researchers, during the SPCT the specimen deformation is caused by bending prior to membrane stretching [31][32][33], and a recent study, confirming this theory, suggests upper limits to Δ, depending on the specimen and punch dimensions [34], as shown in Table 1. Another empirical relationship, also derived from Chakrabarty's membrane theory, is reported in an annex of the code of practice [5] and can be used to derive the load to be applied in the SPCT; it is given here as Equation (13), where K_SP is a correction factor depending on the tested material. In order to find K_SP, at least five SPCTs are necessary, as well as a comparison with a conventional creep test [5,35]. This parameter generally ranges between 1 and 1.5 [36]. An additional difficulty in SPCT data interpretation is the notable variation of the reference parameters due to the large deformations involved in the test [35]; although it is still possible to define η, as in Equation (14), by virtue of Equation (11) a constant EGL, and thus a constant β, cannot be determined [37]. Figure 6(b) shows typical SPCT data output, in terms of displacement versus time, for a P91 steel at 600°C, while Figure 6(c) shows converted creep rupture data (using Equation (11), with K_SP = 1.275), obtained from an SPCT on a P91 steel at 650°C, compared with corresponding uniaxial data. The SPCT data are in good agreement with the uniaxial data.
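The load-to-stress relationship described above (reported as Equation (13)) is commonly quoted from the CEN code of practice in the form F = 3.33·K_SP·σ·R^(-0.2)·r^(1.2)·h_0, with R the punch radius, r the receiving hole radius and h_0 the initial disc thickness; that form, and the sample values below, are stated here as an assumption for illustration rather than as the exact expression used by the authors.

```python
# Minimal sketch of a CEN-style load/stress correlation for the small punch
# creep test, in the form commonly quoted from CWA 15627:
#     F = 3.33 * K_SP * sigma * R**(-0.2) * r**(1.2) * h0
# (F in N, sigma in MPa, dimensions in mm). The form of this expression and
# the example values are assumptions included for illustration only.

def spct_load_for_stress(sigma_mpa: float, k_sp: float = 1.275,
                         punch_radius_mm: float = 1.25,
                         hole_radius_mm: float = 2.0,
                         thickness_mm: float = 0.5) -> float:
    """Load (N) expected to give the same rupture life as a uniaxial test at sigma."""
    return (3.33 * k_sp * sigma_mpa
            * punch_radius_mm ** -0.2
            * hole_radius_mm ** 1.2
            * thickness_mm)

if __name__ == "__main__":
    for sigma in (100.0, 125.0, 150.0):
        print(f"sigma = {sigma:5.1f} MPa  ->  F = {spct_load_for_stress(sigma):6.1f} N")
```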
Figure 6. (a) Cross section of a typical experimental set-up used for SPCTs, with dimensions in mm, from Cortellino [51]; (b) typical SPCT data output for a P91 steel at 600°C, from Cortellino et al. [49]; (c) converted creep rupture data (using Equation (12), with K_SP = 1.275) obtained from an SPCT (SPT in the graph) on a P91 steel at 650°C, compared with corresponding uniaxial data, from Hyde et al. [41].

Small ring creep test

Figure 4(c) shows the typical specimen and loading system of the small ring creep test method, which involves diametrically loading a circular or elliptical ring in tension. The test output is the deformation versus time curve and, similar to the ICT, this needs to be related to the data obtained from a standard uniaxial creep test using the reference stress approach. An analytical solution for the load-line displacement rate of the elliptical ring in the secondary region of the creep curve was developed at the University of Nottingham in 2009, by assuming as the main hypotheses that the effects of the shear stresses are negligible, that bending is the governing deformation mode, and that the material obeys Norton's creep law [37]. The solution is given in Equation (15), where a′ and b′ are the ellipse half-axes, b_0 and d′ are the thicknesses in the radial and axial directions, respectively, and P is the applied load, while Int_2 is defined in Equation (16), where θ is the angular coordinate of the small ring system, as shown in Figure 7, and θ′ is the value of θ at which the bending moment, M, is zero. If the ring is circular, a′ = b′ = R, where R is the radius of the ring. By virtue of the reference stress method, the equivalent uniaxial creep stress and the equivalent uniaxial creep strain rate for the load-line deformation rate, Δ̇, are given in Equations (17) and (18) [37,38]. For the full mathematical treatment, refer to Hyde and Sun [37]. The large deformations involved during the test do not significantly affect the conversion parameter η, the thickness in the radial direction, b_0, or the thickness in the axial direction, d′; however, they do affect a′, which decreases during the test, as does σ_ref; the conversion parameter β also varies with the ratio a′/b′ at each instant of time [37,38]. As a consequence, the creep curve still shows primary and secondary regions, but the latter is characterised by a finite curvature rather than a constant rate [38]. In other words, with this test a constant strain rate is not quite achieved, but it is still possible to calculate the equivalent uniaxial data by using the method proposed by Hyde et al., which involves calculating the instantaneous values of a′/b′, η, β, σ_ref and ε̇^c at many intervals in the secondary region. The average of the calculated values of σ_ref and ε̇^c can then be taken as the equivalent uniaxial data [38]. In order to obtain a better-defined steady state, Hyde et al. [38] are developing software capable of varying the load during the test, according to Equation (17), as the specimen shape changes. The test results for circular (a′/b′ = 1) and elliptical (a′/b′ = 2) rings, with R/d′ = 5, for a P91 steel at 650°C, for a range of equivalent uniaxial stresses, are shown in Figure 8(a) in terms of displacement, Δ, versus time, and in terms of minimum creep strain rate versus the applied stress in Figure 8(b). It can be observed that the method proposed by Hyde et al.
[38] allows highly accurate secondary creep properties to be obtained, since the small ring data lie on the same straight line as the uniaxial data.

Figure 7. Reference frame of the small ring, adapted from Hyde and Sun [37].

Small two-bar creep test

Another small specimen experimental technique, recently developed at the University of Nottingham by Hyde et al. [39], is the two-bar specimen (TBS) test. Figure 9(a) shows the typical test rig, while Figure 9(b) shows the typical two-bar specimen, which consists of two relatively slim bars connected by supporting ends. The sample is creep tested by applying a constant load to the supporting ends through the use of two pins (Figure 9(a)). The specimen geometry is defined by the initial bar length, L_T, which is the distance between the centres of the loading pins, the length of the loading-pin supporting end, k, the specimen thickness, d_T, the diameter of the loading pins, D_T, and the bar width, b_T [39]. Hyde et al. [39] recommended L_T/b_T = 4.5, D_T/(2b_T) = 1.25, k/D_T = 1.3 and a bar length, L_T, of 26 mm, which is larger than the diameter of the SPCT specimen recommended in the CEN code of practice [5], i.e. 8 mm, and than that of the ICT specimen recommended in Hyde and Sun [27]. The equivalent uniaxial creep stress and the equivalent uniaxial creep strain rate are again defined through the reference stress method and are given in Equations (19) and (20), where η and β are determined by FE analyses, as for the ICT; their values are 0.9866 and 1.456, respectively [39]. However, those values were obtained by the use of Norton's law; thus the FE analyses could predict the primary and secondary regions of the creep curve well, but not the tertiary region. Consequently, further investigation into the determination of the reference parameters is required. During the test, the creep deformation of the specimen is essentially due to stretching under uniaxial stress, as is the rupture of the bars. Bending occurs in the area of contact between the sample and the pins, but it has been proven not to have a significant effect on the failure mode or on the creep deformation [40]. This behaviour of the specimen is also observed in the FE analysis results, as shown in the contour plot presented in Figure 10, where the failure of the P91 steel specimen at 600°C and an applied stress of 170 MPa is reached when the damage variable, ω, approaches unity. The two-bar creep test output is very close to the uniaxial creep test output because most of the uniform section is predominantly under a uniaxial state of stress; this is the reason why η is almost 1. Typical small two-bar creep test data output and the corresponding correlation with uniaxial tests are shown in Figure 11. The TBS data are in good agreement with the uniaxial data, as their MSR trend lies on the same straight line as the uniaxial data, as is also the case for the impression and small ring creep test data.

Evaluation of current test methods

The selection of a small specimen creep test technique is dictated by factors such as economics, the type of data required (e.g. creep rupture data or creep strain rate data), the material to be tested and the test conditions [41]. In practice, the ICT method is easy to perform and it gives accurate output in terms of minimum creep strain rate, particularly at relatively high stresses and in the heat-affected zones of welds.
Evaluation of current test methods

The selection of a small specimen creep test technique is dictated by factors such as economics, the type of data required, e.g. creep rupture data or creep strain rate data, the material to be tested and the test conditions [41]. In practice, the ICT method is easy to perform and it gives accurate output in terms of minimum creep strain rate, particularly at relatively high stresses and in the heat-affected zones of welds. This technique appears to be useful in power plant component life assessment [6], and has been deployed in-service to support decisions to continue operation as opposed to undertaking immediate repairs. Despite these advantages, the test has some limitations, e.g. the creep resistance of the indenter needs to be two to three orders of magnitude higher than that of the specimen. Also, tertiary creep behaviour cannot be obtained, because the compressive stress field in the specimen does not allow crack nucleation and propagation [26]. The deformations involved during the test are very small, of the order of μm, requiring very accurate data acquisition and temperature control systems. Furthermore, FE analyses of this test are complicated by the contact interaction between the indenter and the specimen. The same problem is observed in numerical simulations of the SPCT, where the contact is also non-linear. The SPCT potentially allows the entire characterisation of material behaviour up to failure, because the specimen is taken to rupture, and it can also be used to perform focused analyses on critical locations of operating components. Despite these advantages, the interaction of several non-linearities, such as large deformations, large strains, non-linear material behaviour and non-linear contact interactions between the specimen and the punch, induces a complex multiaxial stress field in the specimen which also evolves in time [42-45]. This affects the SPCT fracture mechanism and introduces several challenges for the identification of a robust correlation to convert SPCT data into the respective standard uniaxial creep test data [35,46]. Another major concern is the non-repeatability of the testing method, since the experimental results depend on the set-up geometry [5]. For this reason, a well-established and universally accepted method for data interpretation still does not exist. Progress in the understanding of the complex behaviour of the specimen has been made through FE analyses and microstructural studies. In particular, the implementation of the Liu and Murakami damage model [47] in a commercial FE code, by the use of a subroutine, leads to good simulation of the crack propagation in the small punch specimen, as shown in Figure 12, where a comparison between SEM images of interrupted tests and FE results for P91 steel is illustrated. Figure 12 also confirms the statement by Chakrabarty and other researchers that necking occurs at a certain distance from the specimen centre due to friction [30,32,34,48], and that the zone affected by the maximum damage spreads from the bottom of the specimen through the thickness in the necking area, as reported in the open literature [33,49,50]. The creep mechanism characterising the deformation of the specimen, namely dislocation creep in the secondary creep regime and inter-granular cavitation when the tertiary creep stage is reached, is not affected by the difference between the test temperature, 600°C, and the temperature assumed for the material properties in the numerical simulations, 650°C [34,51]. Figure 13 shows the typical uniaxial creep test output in terms of strain versus time, and illustrates that crack propagation starts in the tertiary region of the creep curve if the uniaxial creep test is carried out at constant stress. By comparing Figures 6(b), 12 and 13, the different behaviour of the small punch specimen with respect to the uniaxial specimen is evident: the crack propagation in the small punch disc starts in the secondary region of the creep curve instead of the tertiary region, as observed for the typical uniaxial specimen.
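As background to the damage-based FE simulations referred to above, a damage-coupled creep law of the classical Kachanov-Rabotnov type is sketched below. This is an illustration of the general ingredients only, not the specific Liu and Murakami [47] formulation, which was proposed to reduce the numerical difficulties of this local form while retaining a damage variable ω that drives failure as it approaches unity.

\[
\dot{\varepsilon}^{c}_{\mathrm{eq}} = A\left(\frac{\sigma_{\mathrm{eq}}}{1-\omega}\right)^{n}, \qquad
\dot{\omega} = \frac{B\,\sigma_{r}^{\chi}}{(1-\omega)^{\phi}}, \qquad
\sigma_{r} = \alpha\,\sigma_{1} + (1-\alpha)\,\sigma_{\mathrm{eq}},
\]

where σ_eq is the von Mises equivalent stress, σ_1 the maximum principal stress, σ_r the rupture stress controlling cavitation, and A, n, B, χ, φ and α are material constants; an element is deemed to have failed where ω approaches 1, which is how a predicted crack path such as that in Figure 12 develops.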
Difficulties in obtaining good agreement between FE analysis and small punch test times to failure are due to the effects of punch load misalignments, initial plasticity and the approximation of the friction formulation between the specimen and the test machine components used in the FE analyses [34] (in the FE model shown in Figure 10, only a quarter of the specimen is modelled, by virtue of symmetry). The predicted failure life has been found to increase by up to 10 times [39] when initial plastic deformation is included in the model [31]; the time to failure can also increase by up to 8 times when the friction coefficient between the specimen and the punch varies from 0 to 0.5 [51-53]. Punch load misalignments can also increase the time to failure by up to 1.49 times, in the worst possible situations [54]. The small ring creep test method is able to provide accurate minimum creep strain rates, especially at relatively low equivalent uniaxial stresses, a feature that makes this test type particularly suited to creep-resistant materials. During the creep test, the small ring specimen is subjected to relatively large deformations, from which relatively small strains are obtained at a lower equivalent stress than that applied to other miniature specimens. As shown in Figure 8(b), the minimum creep strain rate data obtained from circular and elliptical ring creep tests at stresses between 50 and 65 MPa lie on the same straight line as the uniaxial data obtained at higher stresses, between 70 and 100 MPa. The deviation of the elliptical ring data from the straight line is within the typical scatter of creep tests, which is about 8%. Future development involves the establishment of time-dependent geometric correction functions to compensate for the effects of geometry changes during the deformation process [37,38].

Figure 11. (a) Deformation versus time curves obtained from two-bar specimens for a P91 steel at 600°C; (b) creep rupture data and (c) minimum creep strain rate data obtained from two-bar and uniaxial specimens for a P91 steel at 600°C, from Sun et al. [28]; (d) fitted uniaxial and converted (by using Equation (20)) two-bar strain versus time curves for a P91 steel at 650°C, from Sun et al. [113].

The small two-bar creep test allows production of full uniaxial creep curves, but further experimental data and validation are necessary. With this technique, the pins must be made of a material with a higher creep strength than that of the specimen; therefore, there is a limitation in the range of materials which can potentially be tested. When testing high creep strength materials using the two-bar specimen, e.g. nickel-based superalloys for turbine blades, careful consideration should be given to the specimen design. Relatively larger diameter pins and smaller cross-sectional areas of the test sections of the two-bar specimen should be adopted to reduce the mean contact stresses between the pins and the specimen. In such a case, the material for the pins can be chosen to have a similar creep strength to that of the tested material. The contact between the pins and the specimen induces a bending deformation in the extreme regions of the specimen, while the stress distribution in the bars is almost constant [40].
In view of this feature, the reference stress parameter η for this specimen type is almost unity, as the stress field in the effective section of the structure is very similar to that found in a conventional uniaxial creep test specimen. Furthermore, the specimen is taken to failure; therefore, in contrast to the impression creep and small ring creep tests, the tertiary creep region of the material can be characterised and the failure behaviour identified. In conclusion, good agreement between small specimen creep data and uniaxial creep data has been obtained with all of the techniques described so far, particularly in terms of the correlation of minimum creep strain rates. Both small punch and small two-bar creep tests are able to predict the specimen failure life, but the difficulties in the interpretation of small punch data have not yet been overcome, while methods for data interpretation of the two-bar test are still under investigation, even though this test has already been validated for steel materials and a nickel-based superalloy material. Impression and small ring creep tests are both suitable for the determination of the steady-state creep properties. The noise in the ICT output can be of the same order as the output itself, since very small deformation occurs during the test. The opposite problem has been observed for the small ring creep test, during which the specimen shape varies, causing the reference stress to change as well. As a final consideration, small specimen creep testing techniques look very promising for determining creep properties and providing information about failure, even though further experimental and numerical investigations are needed. The current status of the techniques can be summarised as follows:

• the techniques provide estimates of minimum creep strain rate that show good comparison to uniaxial data;
• impression creep specimens have been used to support operational run-repair-replace decisions on components;
• the small two-bar creep test specimen is the only method considered to be capable of providing a full uniaxial creep curve;
• there is currently no agreed standard for the application of small specimen testing, although the definition of a SPCT standard is the most advanced;
• material is typically sampled from the surface of a component, which confers the following opportunities:
  - correlation against current methods for assessing the condition of the bulk component, such as surface creep replicas and hardness tests;
  - provision of component-specific in-service creep strain rates, which can be compared against the response from a representative structural model, thereby allowing a more objective assessment of component condition that accounts for the stress state.

Industry practice for condition monitoring and life management

A range of non-destructive inspection-based assessment techniques and surveys are used during a statutory plant shutdown to support an evaluation of the condition of components operating at high temperature and pressure. These techniques do not provide a direct measure of accumulated creep damage (life consumption); however, they support the subsequent residual life assessment by reference to similar components and systems on sister plant. The current assessment approach requires extensive data mining and review of large quantities of site metallurgical data, from a number of different power stations and at different times in their lifecycle.
The techniques routinely used to assess the condition of components during a plant outage include: surface replication, surface hardness, ultrasonic inspection, magnetic particle inspection and physical dimension measurements using callipers and micrometres. The selection and use of these techniques are influenced by a host of considerations such as cost, perceived risk, familiarity and confidence with the techniques, plant access, accuracy, reliability, regulator preference and tradition. In addition, on-load monitoring of pressure and temperature conditions can be used to provide a generally conservative estimate of the residual creep life. This involves using sampled operational steam or metal temperature data coupled with steam pressure data, with an appropriate creep rupture expression to determine the residual creep life, which is discussed further in the section 'On-load: use of operational data'. It is standard practice to use different approaches and diverse data sets, such as outage overhaul inspection data and on-load monitoring to evaluate component condition. This is necessary because of the intrinsic scatter in the material creep properties and uncertainty in some of the load components, such as fixed support loads and reaction loads from adjacent components and systems, which can also vary as the plant is cycled on and off-load. It should be noted that modern plant design can limit the ability to deploy some of these traditional techniques due to restricted access for inspection. For example, heat recovery steam generators are designed so that it is not possible to use traditional diametral strain measurements on the steam headers. Hence, this provides a requirement to consider the use of the novel techniques to assess creep life consumption such as optical strain gauges [55], alternating current potential drop [56] and small specimen testing techniques such as impression creep, small punch and ring specimens [3,37]. With respect to the use of the non-destructive techniques: on-line strain rate measurement offers great potential to the utility, from the perspective of being able to use the information to proactively manage the integrity throughout the whole life cycle of the station and to prompt beneficial changes in operation, in good time. The ultimate aim is to use the strain rate data iteratively in pipework or component specific computational models, where the system or component model response would be calibrated against component-specific strain rate information, and compared against other relevant condition assessment data obtained during the plant outage. This would demonstrate a truly holistic approach and is considered to be the ultimate aim for optimal plant integrity management. Figure 14 shows a schematic of the current integrity management approach. Particular reference is made to the two following aspects: (1) miniature specimen testing, which could provide significant benefits to the utility with respect to maintaining safe operation, maximising availability and optimising operational and capital costs, and (2) more proactive use of onload data assessment. Inspection planning and reporting The inspection planning process is driven by the need for the utility to demonstrate compliance with PSSR requirements [1] and is applicable to both conventional and nuclear AGR pressure systems. The inspection plans are developed by a 'competent body' that can demonstrate that its staff are suitably qualified and experienced. 
It is common practice in the UK for large conventional fossil-fired stations to prepare a technical review document about three months before the statutory outage. This gives an overview of the previous operational history, maintenance, replacement, inspection and assessment history of all the pressure systems for the Unit. Operating AGR plants are subject to additional licence requirements [2], which for example define requirements for sites to produce periodic safety reviews in support of the stations risk management process. There is a statutory requirement for the 'competent person' to compile a written report of the outage examination within 28 days of return to service, which is agreed with site representatives and the technical inspection body. Subsequently, detailed component and system specific integrity assessment reports and safety cases follow after return to service. Off-load: outage works The following outage inspection techniques are applicable to both conventional and AGR high temperature and pressure systems. None of these techniques are used as systematic input into a predictive creep life assessment model of the piping system or other high temperature components. Pipe movement. Pipe movement is obtained via hot and cold surveys of pipe work hanger positions, an example of one type of pipe hanger set-up is shown in Figure 15. In this arrangement there are two spring loaded pipe hanger supports either side of the main steam pipe, with connections to structural steelwork above and connections to the main steam pipe via a ring (trunnion) clamp below (out of the image). This arrangement is designed to ensure that a constant load is imposed on the pipe as it moves up or down with normal plant operational transients. The hot and cold surveys provide information on pipework operational loads (between cold condition and full load hot operation), which can be compared against the design basis and prompting readjustment of hangers as required. Passive strain measurement. Bow gauge micrometres shown in Figure 16 are typically used for headers and pipework for diameters up to about 600 mm with a precision of 0.01 mm, especially when used with a more accurate locating method such as non-oxidising creep pips. It has been shown [57] that safe management of plant can be achieved with well controlled and managed diametral surveys, coupled with interrogation of other outage and operational plant data. However, a review of large datasets from periodic conventional fossil-fired plant diametral surveys show much greater than expected variability and emphasises the need for good procedural control on site if such methods are to be relied on. More recent developments to improve the accuracy and reliability of passive strain measurement include the ARCMAC high temperature optical strain measurement system [58,59]. The ARCMAC system uses stud-welded optical targets attached to the component in a bi-axial arrangement, which is illuminated from a light source within a purpose designed camera system with a telecentric lens and beam splitter arrangement. The attachment of the stud-welded optical gauge is facilitated by the use of a purpose designed gauge carrier that allows consistent gauge installation, along with the installation of a suitable protective cover. The optical arrangement and illumination ensures that accurate measurements of strain can be captured even if the camera system is not located precisely normal to the target gauge. 
The strain measurement resolution is ∼60 micro-strain, with an error of <10%. Gauge images captured during subsequent shutdown periods enable the creep strain rate to be deduced. The authors have unpublished data from validation tests at 600°C over a period of some 20 khr of operation, with the test periodically interrupted to obtain gauge images and thereby simulate the intended use on site; these data show strain resolution and repeatability in very good agreement with the laboratory creep test extensometer instrumentation. Further extensions to these optical strain measurement techniques have been developed using surface speckle coatings and digital image correlation techniques to provide a non-contact surface strain distribution (across a weldment), with reference images from an adjacent ARCMAC gauge providing a calibrated strain reference [60,61].

Surface creep replicas. These are targeted at regions considered to be more prone to creep damage accumulation, such as weldments, pipe penetrations (fillet welds), attachment welds, pipe bends and pipe terminal positions. The requirement for creep replicas is dependent on the age of the plant and the perceived risk, which may be influenced, for example, by adverse readings from routine passive strain measurement campaigns or known periods of mal-operation evident from reviews of operational data. It is not unusual for several hundred replicas to be taken during an outage on a single unit. The surface replica technique involves capturing the surface features on a film that can subsequently be examined under a microscope. This involves careful surface preparation by grinding the surface with progressively finer abrasive papers, with final polishing. This process should be undertaken with great care to avoid removing the creep damaged surface and also to ensure that any surface oxidation and decarburised layers are removed. The prepared surface for replica assessment, which typically covers an area of 25 × 50 mm, is then etched with a dilute acid such as Nital to reveal the microstructure. A soft cellulose acetate strip is then pressed onto the surface and allowed to dry before spraying with a matt black paint. This provides a contrast under white light when the replica is removed and examined under the microscope. Carefully applied, this process should reveal creep cavities that can subsequently be counted per mm² and classified, as described in further detail in the section 'Case study: ageing main steam pipework'. It should be noted that different types of steel may require a modified process in order to obtain the best quality surface replica. Figure 17 illustrates a surface replica of a CMV main steam line weld, taken after grinding to a depth of 5 mm from the outside surface. Two photomicrographs are shown, at ×200 and ×500 magnification; in this example a high creep cavity count of 842 cavities/mm² was assessed. In this case, the material had been in operation on one of EDF Energy's conventional fossil-fired power stations for circa 260 khr before retirement and subsequent examination. The repair or return-to-service decision is usually supplemented by experiential knowledge, as indicated in the current condition assessment route, Figure 14. The importance of experiential knowledge should not be underestimated in the decision-making process, due to the significant differences in the creep degradation characteristics of power plant materials in widespread use, such as CMV and modified 9%Cr steels (P91).

Surface hardness.
These outage measurements are targeted at regions considered to be more prone to creep damage accumulation and are frequently used during an outage to provide a large amount of data, with the key aspect of the subsequent assessment being the change in surface hardness at a repeat location between successive periodic inspections. It is custom and practice to cross-check these results against other site metallurgical techniques, in particular periodic trends from surface creep replicas obtained from an adjacent location on the component. There is a range of portable site hardness test tools available, using either ultrasonic contact impedance, direct measurement or dynamic rebound, such as the Equotip system, which uses the impact and rebound velocities to determine the surface hardness value. It is necessary to prepare the surface before testing (to remove hard scale), and these methods are frequently used on site during the outage. Various industrial studies of the performance (accuracy and repeatability) of these techniques have been undertaken. For the dynamic rebound systems, the characteristics are (assuming trained and competent operators, and for specimens in the range 180-300 HB):

• low standard deviation;
• error typically ∼5%;
• the error is significantly influenced by the hardness of the specimen, increasing for softer materials;
• results can be influenced by the angle of the probe.

Again, decisions on repair or return-to-service options that use hardness data are typically based on the rate of change in hardness and experiential knowledge from hardness trends associated with other similar components, material, service age and duty. This information is not used in a predictive life assessment model.

Material composition checks. This standard practice confirms that the correct materials have been installed on plant. In some instances rogue materials have been identified, hence composition checks are recommended good practice. For on-site composition checks, portable X-ray fluorescence analysers are typically used.

On-load: use of operational data

The station routinely records steam temperature and pressure data at selected key points in the process system. These techniques are applicable to both conventional and AGR high temperature pressure systems. In addition, most stations will invariably have installed additional surface-mounted or deep-drilled thermocouples on key components or on components being monitored as part of a safety case. This data is stored in the plant historian and various data sampling frequencies can be defined. Typically, if a plant transient is being monitored, the thermocouple sampling interval may be as frequent as every 15-30 s. For longer-term creep temperature monitoring the sampling interval could be increased to several minutes. With respect to creep temperature monitoring, the decision on the sampling interval is influenced by the stability of operation; if the unit being monitored is prone to temperature instability then the sampling interval should be reduced.
Hence, some judgement and experience are needed to define the required sampling interval prior to any computation being undertaken. Once this data is collected, the station evaluates what is termed the creep effective temperature (CET), defined as the single temperature to which all of the creep damage accumulated over the monitored period can be equated. This approach typically uses the design pressure in the computation, although it is possible to use the measured operational pressure; for the purposes of the CET calculation, however, the design pressure is usually sufficient. For UK conventional plant designs, 0.5%Cr0.5%Mo0.25%V (CMV) steels are typically used for main steam line construction. The creep rupture life calculation, using the CET, gives a value of creep rupture life which is dependent upon stress and temperature. Equation (21) is used to estimate the creep rupture life of CMV material and is based on the Manson-Brown 4th degree polynomial formula, where t_r is the predicted time to creep rupture in hours, T is the temperature in Kelvin, σ = 1.25σ_ref, and p_0, p_1, …, p_7 are constants whose values for the aforementioned 0.5%Cr0.5%Mo0.25%V (CMV) steels are given in Table 2. For a main steam line the creep reference stress, σ_ref, is equivalent to the mean diameter hoop stress in Equation (22), where p is the operating pressure, D_0 is the pipe outside diameter and t_w is the wall thickness. From the perspective of the station, the sensitivity to modest increases in operating conditions makes planning the scope of subsequent outages or replacement exercises fraught with uncertainty. This uncertainty triggers a natural response to conduct more sampling inspections at subsequent outages and may result in premature replacement of large sections of pipework or components. Figure 19 shows an example of a typical steam temperature trace obtained at the final superheater outlet from a boiler on a 500 MW conventional unit, with temperatures plotted from two of the four outlet steam legs (A1 and B1) during a run-up to full load. In this example, the steam temperature is sampled at 1-min intervals. Since there are four main steam lines exiting the boiler on a typical 500 MW unit, it is not unusual to find each of the four steam outlet legs operating at a different CET. In this example, there are a number of temperature excursions that significantly exceed the 568°C design temperature. Hence, the primary use of the CET computation is to provide a systematic process to identify operational issues and to prompt a correction. In addition, historical CET computations on units and across steam systems and components are used to support the definition of the inspection scope prior to the next statutory outage.

Mean diameter hoop stress (Equation (22)): σ_ref = p(D_0 − t_w)/(2t_w).

It should be noted that on current UK conventional fossil-fired stations the steam outlet design conditions are nominally 568°C and 168 bar. However, from experience it is not unusual for steam temperatures to cycle beyond 600°C, which is the limit of the CMV creep rupture models in Equation (5). In such cases the use of the CET data is problematical, and a consequence of such adverse operation is that more 'inspection sampling' during an outage is usually stipulated, in order to determine whether the operational instability has manifested itself as an unexpected accumulation of creep damage or the initiation of damage (metallurgical or physical macro-cracks) at weldments. Hence, unstable temperature excursions, as illustrated in Figure 19, should be minimised.
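The CET computation described above can be illustrated with the sketch below, which assumes a Robinson-type life-fraction summation of the damage accrued at each sampled temperature. The rupture-life function is a placeholder standing in for the Manson-Brown expression of Equation (21) (its constants p_0 … p_7 are not reproduced here), while the 1.25 multiplier and the mean diameter hoop stress follow the definitions given in the text; the pipe geometry and temperature history are illustrative only.

```python
# Hedged sketch of a creep effective temperature (CET) calculation.
# rupture_life() is a placeholder standing in for the Manson-Brown polynomial
# of Equation (21); its constants are illustrative and NOT the CMV values.

from math import log10

def rupture_life(T_K, sigma_MPa):
    """Placeholder rupture model: log10(t_r) falls as temperature and stress rise."""
    return 10.0 ** (-18.0 + 24000.0 / T_K - 2.5 * log10(sigma_MPa))

def creep_effective_temperature(temps_C, dt_h, sigma_MPa):
    """Single temperature giving the same damage as the sampled history,
    assuming a Robinson life-fraction summation of damage."""
    damage = sum(dt_h / rupture_life(T + 273.15, sigma_MPa) for T in temps_C)
    total_time = dt_h * len(temps_C)
    lo, hi = min(temps_C), max(temps_C)
    for _ in range(60):                      # simple bisection on temperature
        mid = 0.5 * (lo + hi)
        if total_time / rupture_life(mid + 273.15, sigma_MPa) < damage:
            lo = mid                         # mid too cool: damage underestimated
        else:
            hi = mid
    return 0.5 * (lo + hi)

# Mean diameter hoop stress, Equation (22), for a representative main steam pipe
# (D_0 = 342 mm, t_w = 60 mm) at the nominal 168 bar design pressure.
sigma_ref = 16.8 * (342.0 - 60.0) / (2.0 * 60.0)
sigma = 1.25 * sigma_ref                     # stress used in the rupture expression
history_C = [560.0, 566.0, 572.0, 575.0, 568.0, 563.0]   # sampled leg temperatures
print(f"CET = {creep_effective_temperature(history_C, 1.0, sigma):.1f} degC")
```

Because the rupture life is strongly temperature dependent, the CET is always weighted towards the hotter samples, which is why short excursions above the design temperature have a disproportionate effect on the calculated damage.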
Unstable operation can also result in severe (rapid) thermal transients occurring on plant, which can result in the initiation of fatigue cracks in weldments and, for high temperature systems, subsequent through-wall propagation via creep crack growth [62] under steady load conditions. Figure 20 shows a typical stub header, with three rows of boiler tubes; Figure 21(a) shows the design of a large steam header, which is a significant class of high temperature thick-sectioned component installed in a boiler. These components are subject to very similar outage inspections and on-line CET assessments as described for main steam pipes; however, they are considered to be more complex to assess due to the numerous penetrations in the header shell and the difference in stiffness between the relatively rigid header shell and the thin interconnecting boiler tubes. Steam headers ensure proper distribution of steam across the boiler space, with the boiler tubes providing the heat transfer surface within the furnace of a conventional fossil-fired boiler. There are many different designs of steam header (material and geometry) within a typical conventional fossil-fired boiler. With respect to the use of small specimen testing, the critical areas on the header are usually the inter-ligament positions between the numerous header-to-boiler-tube connections. The photograph in Figure 21(b) shows the limited access available at the inter-ligament positions on this particular design for material extraction. For this design the approach would be to take a reference small specimen material sample from the adjacent and accessible header shell. A computational model of the header, using on-line temperature and pressure measurements to determine the theoretical residual life at the inter-ligament positions, would then supplement this reference data. Reference to the process outlined in Figure 14 indicates how the small specimen testing and on-line data analysis could be used iteratively over an extended period of service to establish the residual life, along with inspection data acquired during periodic overhauls.

Holistic condition assessment

The main objective of the station is to maximise plant availability and profitability, whilst maintaining safety. In order to achieve this objective the station must be able to understand, with good certainty, the remaining life of the assets at any time during the operation of the asset, so that it can cost-effectively plan future inspections, refurbishments and replacements as required. This requirement to understand with certainty the remaining life becomes ever more acute in commercial electricity generation where the profitability may be marginal, whether caused by government or environmental policy, market prices, taxation or other factors. At the time of writing this paper these factors are all significantly influencing the profitability of large conventional fossil-fired stations in the UK. From a technical perspective, the current approach to defining a 'holistic' view of the condition of the generating assets is achieved by the assimilation and deductive assessment of information gleaned from the activities described in the sections 'Off-load: outage works' and 'On-load: use of operational data'. The process is strongly dependent on the experience of the assessment engineers, who ideally will also have conducted similar investigations on other generating plant (likely of different age and condition).
Hence, the current approach is considered to be 'inspection based' and usually results in an incremental increase to the future inspection sample size to contain a perceived emerging threat. The current approach is captured in Figure 14. However, the current 'holistic' condition assessment process does not:

• provide a predictive life assessment beyond the next major statutory outage, which in the UK occurs every 4 years;
• actively integrate the various disparate data sets obtained from the plant outage or on-load monitoring.

Figure 14 seeks to illustrate two significant missing activities, namely (1) miniature specimen testing and (2) on-load condition assessment, which could be exploited to embed a more predictive life assessment approach and thereby satisfy the main objectives of the plant owner outlined at the beginning of this section. The following case study illustrates how creep life assessments and decisions are currently achieved.

Case study: ageing main steam pipework

The data presented in this case study is based on conventional plant operation and focused on information and techniques currently used to assess the remaining creep life and hence decide on replacement. The through life integrity of main steam CMV butt welded pipework systems has been dominated by the condition of the weldments, which have typically been addressed by means of periodic inspection, weld repair or pipe spool replacement. Most of the integrity issues associated with the weldments have manifested as circumferentially distributed creep cavitation or subsequent circumferential cracking. This damage is affected by pipework system loads and it is not uncommon to find the observed weld damage biased (creep cavitation or depth of cracking) around the circumference of the pipe weld. Invariably, at the latter stages of a station's life, the location and condition of all the major welds has been established and procedures are in place to implement any needed repair and replacement activities. However, the failure of parent material is more likely to occur due to the acting pipe hoop stress, which can potentially initiate longitudinal cracking. It is very difficult to comprehensively identify those parent material regions close to creep life expiry, noting the large volume of parent material in typical steam pipe systems. As the pipe system enters the later stages of life, conducting piecemeal butt weld replacement by the insertion of new pipe spools becomes more difficult due to the aged condition of the cut faces of the original pipe section, which is typically assessed by on-site surface replication and hardness testing. This 'test' of the parent material condition, when conducting pipe spool replacements, is often used as an opportunistic indicator of parent material condition, due to the opportunity to undertake some metallurgical examination through thickness. Firstly, some background related specifically to the case study pipework example. This case study considers one of the straight pipework sections removed from steam leg B1 and the subsequent laboratory examination to investigate the distribution of creep cavitation damage on the surface and through wall. This pipe section (and others) was removed to reduce operational risk, having been sanctioned as life expired based on surface replication results during the 2009 outage, which showed moderate to high levels of creep damage. The nominal design information for this main steam pipework is:

• outside diameter: 342 mm;
• wall thickness: 60 mm;
• design pressure: 173.8 bar;
• design temperature: 568°C.

In-service inspection history

During service the CMV pipework system had been subjected to the standard outage examinations and periodic in-service CET analyses. These analyses had been undertaken regularly since 1993, typically based on a 6-month block of steam temperature data from the monitored steam leg. Table 3 shows a sample of this CET data for all four steam legs (A1, A2, B1, B2) on the unit. The data presented in Table 3 is determined using the approach described in the section 'On-load: use of operational data'. Data blocks of more or less than 6 months could be used to expedite the calculation. The noted step in CET values between the monitored dates is quite normal for large conventional fossil-fired units and is driven by changes in operation, such as the move from base load to flexible operation, and other factors such as a drive to extract more MW from each unit. This simply reflects the very real commercial pressures affecting operation. The CET data collated over a number of years of service shows that each steam leg operates at CETs that can be as much as 5°C different (omitting the suspiciously low 1999 CET data for leg A1). Such relatively modest differences in CET can, if sustained for a long enough period of time, result in significant differences in creep rupture life from leg to leg on a unit. Figure 19 shows this leg-to-leg variation in temperature, noting that practical plant control is further complicated when the units are subjected to a high number of starts, as illustrated by past station operation in Figure 2(a). Another standard approach taken during the outage is to take diametral strain measurements. Typically, these are taken over installed stellite grade 6 creep pips, as reference measurement locations, on the outer diameter. For the case of the unit on Station 'A', the pipework creep pip locations were towards the boiler stop valve at the 160 ft level in the boiler, as well as at the 100 and 60 ft levels. Various diametral strain measurements had been taken over the installed creep pips during successive outages dating back to 1993 (unit operation at 161 khr and 321 starts) and including the 2009 outage. Table 4 shows the pipe diametral measurements taken at the 160 ft level on legs B1 and B2 between 1993 and 2005 for comparison. Four diametral measurements are taken across the eight installed creep pips around the pipe circumference. The pipe diametral measurements show a generally steady increase in pipe outside diameter, with only a small number of measurement anomalies evident. Hence, the average strain rate over the period 1993-2005 (60,000 h) is ∼4 × 10⁻⁸ h⁻¹, which equates to a minimum operational creep life of ∼300,000 h. The strain rate is determined by evaluating the diametral strain between periods and correcting for the temperature at which the measurement was taken and any micrometre zero (offset) errors recorded. Pipe diametral measurements can sometimes be taken directly across the pipe outer diameter; if this is the case, additional corrections would be required to account for oxide film growth over the time period between successive measurements. The subsequent site diametral measurements are assessed using conservative residual life formulae, which are derived from strain measurements taken from a series of internally pressurised creep rupture tests undertaken by the UK's former Central Electricity Generating Board.
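The diametral strain rate quoted above can be reproduced with a calculation of the type sketched below; the zero-offset and thermal-expansion corrections follow the description in the text, but the coefficient of expansion, offsets and readings are illustrative assumptions, not site data. The same sketch evaluates the mean diameter hoop stress of Equation (22) for the case study geometry, using the design pressure for illustration.

```python
# Hedged sketch: average diametral creep strain rate between two outage surveys.
# Correction values and readings below are illustrative assumptions only.

ALPHA_STEEL = 1.2e-5   # assumed coefficient of thermal expansion, 1/degC

def corrected_diameter(d_measured_mm, temp_C, zero_offset_mm, ref_temp_C=20.0):
    """Remove the micrometre zero (offset) error and refer the reading to ref_temp_C."""
    return (d_measured_mm - zero_offset_mm) / (1.0 + ALPHA_STEEL * (temp_C - ref_temp_C))

def diametral_strain_rate(d_earlier_mm, d_later_mm, hours_between):
    """Average diametral creep strain rate between two corrected readings."""
    return (d_later_mm - d_earlier_mm) / d_earlier_mm / hours_between

# Example: readings 60,000 h apart, of the order of the ~4e-8 1/h quoted above.
d_1993 = corrected_diameter(342.05, temp_C=25.0, zero_offset_mm=0.02)
d_2005 = corrected_diameter(342.87, temp_C=22.0, zero_offset_mm=0.01)
print(f"strain rate ~ {diametral_strain_rate(d_1993, d_2005, 60000.0):.1e} 1/h")

# Mean diameter hoop stress (Equation (22)) for the case study geometry at the
# 173.8 bar design pressure: sigma_ref = p (D0 - t_w) / (2 t_w) ~ 40.8 MPa.
sigma_ref = 17.38 * (342.0 - 60.0) / (2.0 * 60.0)
print(f"mean diameter hoop stress ~ {sigma_ref:.1f} MPa")
```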
When comparing the CET and diametral data for legs B1 and B2, leg B1 shows a lower average strain rate, but a higher CET trend, than leg B2. It might have been expected that the higher CET trend on leg B1 over the extended period of measurements would have led to a higher diametral strain rate. This shows the current difficulty of interpreting such measurement data in isolation, and the reason why other, more invasive, methods, such as surface replication, are used to assess material condition. During the 2009 outage the original CMV straight pipe section, identified as SM14, was removed from leg B1 for further laboratory examination, based on observed 'High Orientated' creep replica results. The adjacent pipe section immediately upstream from this removed section was left in service, with only a 'very isolated' replica assessment level assigned to the parent material.

Post-service laboratory examination

Both surface creep cavity and through section creep cavity mapping were carried out to the industry procedures described in the GENSIP guidance [63]. GENSIP refers to the UK 'Generators Safety and Integrity Programme', which is a member self-funded utility collaboration tasked with ensuring best practice is shared across the industry. The creep cavity damage levels indicated are classified in accordance with the definitions in Table 5. These classifications are provided by the technical service provider and enable them to compare observed creep damage and hence recommend a suitable course of action. The surface creep cavity mapping was taken at various axial and circumferential positions on the examined specimen (SM14 from leg B1), and is illustrated in Table 6. With reference to the pictorial representation and definitions in Figure 13, the technical service provider classification in Table 5 is more generally interpreted as follows: isolated (level 2-3), aligned (level 8-9), micro-cracking (level 10). In addition, Figure 17 shows a photomicrograph from a section of retired CMV pipework, with a creep cavity count of 842 cavities/mm²; in this case the classification would be judged at level 5-6, mid-to-high orientated. This exhibits quite a high creep cavity count per unit area and would be expected to progress to grouped and aligned in the short term if left in service. The German VGB guideline [64] provides a revision of the Neubauer-Wedel creep cavity classification [65] and is reproduced in Table 7. The ECCC recommendations [66] on residual life and microstructure reference investigations undertaken to correlate the VGB damage classification [64] against residual life for low alloy steels in the heat-affected zone of a weld. In this particular investigation, VGB damage level 3 correlates to an expended life fraction of 0.691. With reference to the ex-service SM14 straight specimen surface replication (Table 6), three angular positions (135, 225 and 315 degrees) at a distance of 1500 mm along the pipe were selected for through section creep void mapping. Table 8 shows an example for the 315° position. Each of the through section samples was carefully aligned with the corresponding surface replica positions. A series of replicas was taken at through section depth increments of approximately 4 mm until the bore was reached. At each 'depth' position, surface replicas were taken horizontally across the width of the extracted through wall specimen, resulting in a sample of approximately 15 per 'depth' position.
These replicas were commenced at a distance of 2 mm from the cut face to avoid any damage from specimen preparation. The results show a trend of reducing cavity count through the section, peaking at the outer surface position. It is possible to further summarise and compare the through wall cavity count distributions in Table 8, based on the classifications in Table 5 (technical service provider) and the VGB classification in Table 7. This illustrates the somewhat subjective nature of the cavity count classifications and the importance, where possible, of having a clear understanding of the origin of the cavity count classification used during an in-service inspection and of the subsequent impact on residual life.

Comments

The following points are evident considering the site inspection history and subsequent examination of the ex-service, life expired CMV straight pipe section SM14:

• leg B1 shows a lower average strain rate based on diametral measurements compared to leg B2, but operates at a higher temperature (CET);
• there are significant operational temperature differences between the main steam legs exiting a 500 MW conventional boiler;
• through section creep cavity mapping on straight pipe section SM14 (Table 8) shows a wide range of damage levels, peaking at the outer surface, which implies a significant residual life for the bulk of the parent material.

These results emphasise why the current inspection and sentencing approach requires expert deduction and elicitation of various data sets in order to provide advice on future inspection plans, monitoring schemes and eventual repair and replacement options. The case study illustrates some of the more subjective aspects of the current approach to life assessment of ageing components at high temperature. The importance of the expert's view on the condition of the component under examination should not be underestimated, and the need for comparison against other similar components elsewhere (possibly older with respect to service duty) is evident. The case study shows that the current life assessment process would benefit from more synergistic use of the data routinely collected during an outage. In addition, life assessment would greatly benefit from the more proactive use of in-service monitoring data along with the use of more predictive models for assessing residual life, which would subsequently allow optimisation of future plant operation.

Table 7. Creep damage assessment classes (VGB guideline [64]).
Assessment class: Structural and damage conditions
0: As received, without thermal service load
1: Creep exposed, without cavities
2a: Advanced creep exposure, isolated cavities
2b: More advanced creep exposure, numerous cavities without preferred orientation
3a: Creep damage, numerous orientated cavities
3b: Advanced creep damage, chains of cavities and/or grain boundary separations
4: Advanced creep damage, micro-cracks
5: Large creep damage, macro-cracks

… components, in order to establish repair strategies and supporting life assessment. They are also used in nuclear applications and for fatigue life evaluation.

Strength ranking

Power plant materials age during high temperature service and periodic structural integrity assessments are needed to quantify their permissible service life. Implementing an inspection priority for welds that are subject to in-service creep damage and fatigue cracking is standard practice, with suitable inspection and repair methods being readily available to the utility.
However, identifying a suitable and cost-effective inspection priority for ageing parent material is of concern due to the extensive amount of material that may be affected, and the difficulty in identifying the optimum locations and inspection sampling regime required. Understanding system or component specific creep degradation rates will enable more quantitative life predictions to be made that truly reflect the ageing of the material (weld or parent) operating under service loads and ideally should enable timely changes to operation and hence optimise serviceable life. Figure 14 emphasises the importance of the need for timely feedback on factors affecting condition of the assets to the plant operator. The use of small specimen testing is intended to provide information on component strength ranking and the rate of creep life consumption to improve targeting of subsequent outage inspections and operational improvements, resulting in a reduction in the rate of damage accumulation. If these methods are implemented on parent materials, as recommended in this paper, the expectation is that there will also be a commensurate reduction in the creep damage accumulation rate at welds. The greater use of on-line monitoring, highlighted in Figure 14, is an important complement to the use of small specimen testing. The use of small specimen creep testing provides an alternative technique that has merit when compared to conventional creep testing, due to the less invasive procedures, which do not require a significant amount of material to be removed from in-service components and subsequent weld repair [67]. Decisions on inspection, replacement and repair strategy In the open literature, many examples of plant component strength ranking have been considered in case studies [13][14][15]. In the 1990s, RWE npower carried out impression creep and small punch tests on scoop samples from two Grade 91 headers which were considered close to premature Type IV weld failures. As a reference material, a weak Grade 91 bar section was creep tested by using both small specimen and standard testing methods. By plotting the impression creep strain rates obtained from the headers samples against the strain rate of the reference material, it was found that the creep strength was similar for both materials, as shown in Figure 22. In the same plot were also reported the strain rates for a prematurely failed endplate and two other samples taken from plant items with no known problems. Cross-weld specimens were also creep tested from the reference material, specifically welded, in order to acquire Type IV data. The test results led to an early inspection of the headers, one of which was found to be widely cracked, while the other header cracked a few years later. Hence, deployment of the non-conventional small specimen creep testing methods avoided costly failure in service and associated unplanned outage time and allowed life extension of the headers [14,67]. Other tests were carried out by RWE npower by sampling main steam pipework components with the intention to identify the damage at an early stage, the results of which are plotted in Figure 23 that shows impression creep strength ranking for CMV. The ISO lower bound line intersects a sample which parallel conventional uniaxial testing has shown possesses a strength equivalent to the lower bound (mean-20%) ISO value for this material in the as-received condition [15,67]. 
By testing a significant number of ex-service CMV samples, an empirical correction factor was also found, in order to compare components of very different plant ages. The histogram in Figure 24, which shows the CMV data corrected for operating hours at the time of sampling to reflect strength at the start of life, is helpful in placing any other CMV sample within the creep strength scatter band [67]. Although small specimen testing techniques have proved to be extremely useful in creep strength ranking, their standardisation is still required and the effects of the scoop sampling on the creep response of components still need to be investigated [67].

Figure 22. Impression creep strain rates of plant items ranked in order of decreasing MSR, obtained from samples from the suspect header compared to a sample from a weak failed endplate and samples from other components, from Hyde and Sun [67].

Small specimen creep data correlated with hardness data for life assessment

The research into the correlation between room temperature hardness data and time-temperature parameters has been ongoing since 1943 [68,69], but no physical link nor any explicit relationship between them has been found. Many researchers have related creep life evaluation and hardness data through the Larson-Miller parameter, but this approach only allows a first assessment of the damage if the initial hardness, at a time of 0 h of creep, is known [70-77]. Unfortunately, it is very rare for a station to have a record of this initial hardness data and, as a consequence, the method is suitable only to provide an initial idea of the component damage. Currently, hardness data continue to be routinely collected during plant outages because of the simplicity of the test method and the potential to provide an indication of the condition of the material, in contrast to the evaluation of the minimum creep strain rate, but more research is needed to evaluate the benefits. Creep and hardness data are not correlated by any mathematical model because of the different parameters they are related to, since creep life is a function of the operating temperature and stress, while hardness, after prolonged service, is mostly related to the thermal ageing at the operating temperature [6]. Nevertheless, Brett [12] found that, by plotting the variations of minimum creep strain rates obtained from both uniaxial tests and ICTs at 600°C and at a stress of 155 MPa against room temperature hardness for Grade P91 steels (with different service histories), the samples with the lowest minimum creep strain rates have the highest hardness (Figure 25(a)). Brett [12] also plotted the time to failure against the hardness (Figure 25(b)), and found that the samples with the highest hardness also have the longest time to failure, where the time to failure for the impression test samples was calculated by using the Monkman-Grant relationship [6,74]. Hence material hardness data, which are routinely captured on ageing materials during statutory outages, represent a characterising parameter that should be further researched in order to establish whether they can be more deeply integrated into a life prediction model. The variability associated with site-acquired hardness data should be considered in such research.
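For reference, the Monkman-Grant relationship used to estimate the failure times of the impression creep samples takes the general form below; the exponent m is often close to unity, and the constant C_MG must be calibrated against uniaxial tests on the same class of material (the specific values used by Brett [12] are not reproduced here).

\[
\dot{\varepsilon}_{\min}^{\,m}\; t_{f} = C_{MG}
\quad\Longrightarrow\quad
t_{f} \approx \frac{C_{MG}}{\dot{\varepsilon}_{\min}^{\,m}},
\]

so that a minimum creep strain rate obtained from an impression creep test, once converted to its equivalent uniaxial value, yields an estimated rupture life.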
Figure 26 provides an illustration of the correlation between hardness (HV) and creep replica assessment level obtained during an outage on 200,000 h old CMV parent material, and is taken from two main steam lines (legs A1 and A2) exiting the boiler on a conventional fossil-fired unit operating at a design temperature of 568°C and pressure of 170 bar. This sample comprises 79 assessment locations from leg A1 and 113 from leg A2, with a mean hardness of 123 HV on leg A1 and 126 HV on leg A2, which compares to a start-of-life mean hardness of 170 HV. In this data sample, the creep replica is assessed at seven levels as defined in Table 9; these assessment level definitions are from the same source as those shown in Table 5. The data has been obtained from a range of locations on the main steam pipe system and from a mixture of pipe bends and straight sections, but predominantly straight sections; hence these locations will have been exposed to a range of in-service stress levels and temperature histories (Figure 19). Inspection of the hardness data at creep replica assessment levels 2, 3 and 4 reveals very similar mean hardness values of between 123 and 125 for these datasets. The section 'Off-load: outage works' outlined that a typical hardness measurement accuracy of ∼5% can be expected, but with an increasing error on softer material. Hence, it is evident from this sample of data collected from ageing CMV parent material that there is no definitive correlation between the surface replica assessment levels and hardness data. The data presented for leg A1 in Figure 26(a) could be interpreted as indicating a potential reduction in hardness with increasing creep cavity count; however, this is not evident in the data presented for leg A2 in Figure 26(b).

Figure 25. (a) Variations of minimum creep strain rates at 600°C and at a stress of 155 MPa with room temperature hardness for Grade P91 steels (with different service histories), from Brett [12], and (b) variations of uniaxial rupture life, t_f, at 600°C and at a stress of 155 MPa with room temperature hardness for Grade P91 steels (with different service histories), from Brett [12].

Applications for irradiated materials

Understanding the effects of neutron irradiation on the performance of materials used in the construction of nuclear power stations is a challenge when considering the requirements for life extension of existing reactor designs and future advanced fusion reactor technologies. The use of small specimen testing to characterise materials behaviour has many advantages in this respect due to the small volumes of irradiated material required. The ASTM symposium [78] in 1983 provided an initial milestone for the development of these methods for irradiated specimens. This highlighted some of the important considerations that have influenced the development of the testing methods, such as limited space in hot test cells, the effect of neutron fluence gradients across the reactor components, personnel dose levels in post-irradiation testing and the development of materials suitable for use in high energy (∼14 MeV) nuclear fusion reactors. The following sections review the more prevalent uses of small specimen testing on irradiated materials, related to fracture strength, material property evaluation, behaviour and condition monitoring of in-service reactors.

In-service condition monitoring

Currently the SP testing technique has gained an important role in monitoring nuclear power plant operation and critical areas, such as the heat-affected zones of welds. In particular, the material state of all Slovak reactor pressure vessels is monitored through SP testing (several thousand material samples) as part of the advanced surveillance specimen programmes, which aim to extend the life of the reactor pressure vessels in the WWER-440 pressurised light water reactors by at least 20 years [79,80], with typical operating conditions of 297°C and 123 bar [81]. The focus of this surveillance programme is to provide a through life assessment of the tensile strength and ductile-brittle transition temperature at various locations within the reactor, with material extracted using the Rolls-Royce SSam™-2 device and compared against reference material blocks. Sampling locations are defined with care so that there is no requirement for subsequent repair or monitoring. Some of the small specimens have been irradiated in the Halden research reactor at a fluence of 4.02 × 10²³ n m⁻² [79], which showed an expected increase in the tensile and yield stress properties. Comparisons between the fracture appearance transition temperature values obtained from the SP tests and standard Charpy tests showed good agreement for material in both the initial state and the irradiated condition. This application of SP testing provides the Slovak regulatory authorities with contemporary information on the change in material properties due to in-service operation and irradiation levels. This application of SP testing is similar to the use of impression creep testing applied by Brett et al. [12-15] to ageing conventional power plant materials subject to thermal creep. Importantly, the examples provided by these investigators show practical examples of the techniques' use to assess the condition of operating plant, driven by regulatory and safety requirements.

Material properties

Hyde et al. [3] produced a comprehensive review of the use of miniature specimen testing for mechanical and creep material properties. This importantly concluded that viable test methods exist; however, there is no single specimen design that can cater for all data requirements. It is evident that commercial objectives have driven the development of some of these techniques, such as SP and impression creep, and on specific materials of interest on operating plant, such as CMV and P91 steels on conventional stations. Hyde's [3] review provided a perspective on future requirements and emphasised the need for greater standardisation of experimental test methods, which is still the case, as discussed in the section 'Codes and standards for small specimen creep testing techniques' in this current work. Small specimen testing is perceived as a key approach for obtaining critical information on the mechanical properties, deformation and fracture behaviour of candidate materials in support of the ongoing design and development of fusion reactors. The challenges associated with the development of the international fusion materials irradiation facility (IFMIF) to test reactor materials at high fluxes of 14 MeV and temperatures in the range 250-550°C are described by Knaster et al. [82].
This test facility is expected to be ready and available to use within 10 years from project approval and is constrained for test volumes of up to 500 cm 3 . A range of small specimens have been irradiated as part of the engineering validation and engineering design activities phase, with subsequent tests related to tensile data, fracture toughness, creep and fatigue crack growth, thereby demonstrating the practical set-up of the irradiation capsules. However, one of the major challenges cited by Knaster et al. to overcome is ensuring adequate validation of the derived irradiated properties to the regulatory bodies. This again emphasises the need for accepted standards and codes of practice for all the deployed small specimen test methods and in readiness for the high flux IFMIF facility once constructed. The further validation of these test methods on ageing materials on operational nuclear or conventional stations will certainly support this objective. Lucas et al. [83] provides some examples of the developments required related to improve understanding of the fracture, fatigue and deformation behaviour. Fracture toughness tests on a range of specimen sizes on candidate reduced activation ferritic-martensitic EUROFER97 steel are described, with the use of correction factors derived with finite element simulations of crack tip stress fields to address specimen size effects. These tests show the requirement for close scrutiny of the mechanics of the testing process coupled to the observed material behaviour. Knaster et al. [82] emphasised the ultimate requirement for validation of material properties derived from small specimen tests in support of the design of fusion reactors. The challenge is to establish material behaviour models that use the measured parameters and observed behaviour of the materials tested; a mechanistic modelling approach. These material behaviour models must be demonstrably scalable to account for the size of the reactor components and expected in-service loading due to neutron fluence gradients, thermal gradients and environmental factors such as the presence of helium produced by the fusion process, which will influence the fracture behaviour [83]. Examples of the potential to model SP tests with a finite element simulation is provided, with respect to the evaluation of the deformation behaviour obtained in conjunction with hardness testing, although the potential effects of thermal, environmental ageing or stress state effects are not explicitly captured. Hardness data is routinely acquired during periodic inspections of ageing conventional plant and is usually interpreted in conjunction with metallurgical evaluation by creep replicas. There are some material models in use on conventional plant that couple surface hardness data with the Von Mises stress to evaluate time to creep rupture. These models typically are used to supplement safety case assessments on operating plant and hence are necessarily used in conjunction with several other methods described in the section 'Industry practice for condition monitoring and life management'. Nonetheless, it is evident that such hardness models (possibly in conjunction with other measured parameters) may be able to be further developed to a position where a practical forward-looking lifting model can be routinely implemented on operational plant. 
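The hardness-based rupture models referred to above can be illustrated with a minimal sketch. The functional form and all coefficients below are hypothetical placeholders chosen purely for illustration; the models actually used on plant are material-specific and calibrated against rupture data of the kind shown in Figure 25, so this should be read as a schematic of the idea (hardness and von Mises stress in, indicative rupture life out) rather than as any particular plant model.

import math

def rupture_life_hours(hardness_hv: float, von_mises_mpa: float,
                       a: float = 8.0, b: float = 0.03, c: float = -4.0) -> float:
    # Hypothetical form: log10(tf) = a + b*HV + c*log10(sigma_vm).
    # a, b and c are illustrative assumptions, not fitted plant-model coefficients.
    log_tf = a + b * hardness_hv + c * math.log10(von_mises_mpa)
    return 10.0 ** log_tf

# Softer, aged CMV material (see the case-study hardness values) gives a shorter
# indicative life than start-of-life material at the same equivalent stress.
for hv in (170.0, 125.0):
    print(f"HV {hv:.0f}: {rupture_life_hours(hv, von_mises_mpa=45.0):.3g} h")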
Various options for modifying Liu and Murakami's [47] constitutive creep models are currently being investigated by the authors of this paper for CMV and P91 steels that account for the change in surface hardness and the stress state. The hardness data used in this model development is based on experimental tests and examination of periodic hardness data collected over several inspection campaigns on operational plant. The assessment framework in Figure 14 illustrates how small specimen testing should be used in conjunction with data acquired from existing inspection procedures along with material or component behaviour models that utilise in-service operational data. The authors of this paper and co-workers in the EPRSC sponsored Flex-E-Plant consortium are actively developing these integrated models for conventional plant applications and on materials such as P91 steel, which is similar in composition to materials being evaluated for fusion reactor applications. Reduced activation ferritic-martensitic steels such as F82H are candidates for use in fusion reactors [84] and have been developed from modified 9Cr-1Mo steels (by replacing Mo and Nb with W and Ta), which are used extensively in modern conventional power stations. In 2014, Bruchhausen et al. [85] assessed the behaviour of ferritic-martensitic oxide dispersion steels by use of SP testing at 650°C complemented by uniaxial creep tests. Importantly the SP tests were conducted in accordance with the current CEN SPCT code of practice [5]. The SP failed specimens were extracted from a material block in the transverse and longitudinal directions and the micrographs of the failed specimens exhibited very different crack patterns, which are indicative of the anisotropic alloy microstructure. The evident strong anisotropy is considered due to scatter in the steady-state creep response and influenced by the bi-axial stress field active in the SP test and variability in the alloy's microstructure, which was also evident from other uniaxial creep tests on conventional specimens. These results illustrate again the requirement for comprehensive modelling-simulation studies of the SP and other small specimen tests in order to inform the relevant code of practice. It is important to adapt this integrated test-modelling approach to the situation of in-service components and the inevitable regulatory requirement to demonstrate fitness-for-service in a production reactor. The requirement for integrated small specimen experimental designs to provide fundamental material behaviour models is reiterated in Odette et al.'s review on innovations in small specimen testing [86]. One of the challenges of demonstrating the validity of selected materials used in a production fusion reactor is to account for the effects of the neutron fluence and thermal gradients in the structural components. Gilbert et al. [87] have reviewed some of the factors influencing material behaviour such as neutron flux gradients, component size and helium production on the design of a proposed demonstration fusion power plant. The study shows an expected spatial variation of the neutron flux across various structural components, which is also evident in other reactor designs associated with naval pressurised water reactors. 
For these naval reactor designs the through life effects on material behaviour due to the spatial variation in irradiation creep and swelling is accounted for by measuring the deformation of representative fuel coupons irradiated in research reactors, supplemented by measurements of end of life fuel deformation of spent reactor fuel modules. These experiences with the effects of spatial variation of neutron flux in naval reactors, coupled with the very high energy levels expected in production fusion reactors only further emphasise the need for standardised small specimen test codes of practice. On conventional power plant an analogy can be made with the change to flexible operation in the 1990s (see Figure 2(a) and the significant increase in annual unit starts). This resulted in large periodic thermal stress gradients acting across thick-sectioned components, resulting in critical integrity problems [62]. The older conventional stations are still suffering from the effects of this type of operation, which coupled with the long running hours has driven the development and field application of small specimen testing methods described in this paper. Fracture Understanding the long-term fracture behaviour is critical to ensure the integrity of irradiated structures. The SP testing technique has undeniable advantages due to the small irradiated specimen volume and availability of a recognised code of practice [5]. Material properties of the heat-affected zone of welds, and the DBTT can be determined by the use of the SP testing technique [79,88]. SP testing technique was first used by Baik et al. in 1983, who evaluated the DBTT of a Ni-Cr steel and established a linear relationship between small punch and standard Charpy V-Notch test results in terms of DBTT [89,90]. Mao and Kameda [91], by observing SP test data, found that neutron irradiation significantly increases the DBTT in Fe based alloys doped with Cu, while Wakai et al. proved that the production of helium can cause the shift of DBTT in F82H steels [92]. According to Kasada et al. [93], the master curve (MC) methodology, developed by the American Society for Testing and Materials [94], has been successfully used by employing small specimens in the assessment of the drop in fracture toughness caused by neutron irradiation. More recently, Martínez-Pañeda et al. have proposed a novel methodology which involves a notched small punch specimen for the evaluation of the fracture toughness [95]. In particular, they determined a linear relationship between the notch mouth opening displacement and the vertical displacement of the SP, and a critical value of the notch mouth opening displacement of the SP, that can be compared with the corresponding parameter in conventional fracture tests [95]. Other applications There are a variety of other applications to note such as, screening of materials, assessment of hydrogen embrittlement, supporting fatigue studies and investigating very high temperature performance of materials such as single crystal alloys used in gas turbine applications. These are all derivatives of the methods discussed previously and are discussed briefly in this section. One of the other aspects to note in this work is that many of these small specimen tests require the development of novel measurement techniques due to factors such as specimen size and geometry, requirements for hot cells in nuclear applications or for testing in combustion atmospheres for conventional plant applications. 
These may present requirements for the use of full-field optical based noncontact measurement systems such as digital image correlation, infra-red thermography, laser extensometry etc. In 2013, Methew et al. [96] demonstrated the benefits of using the SP techniques to rapidly investigate the creep performance of 316LN stainless steel with different nitrogen content levels, using the technique as a means of differentiating the creep rupture performance between alloy heats. Although the SP technique provides the ability to undertake multiple tests quickly the study emphasised the potential for shortfalls due to the short duration of the tests, typically <1000 h and the inability to assess effects due to longer-term thermal ageing and environmental factors. This is a common problem associated with small specimen tests and emphasises the opinion proposed in this paper that the application of small specimen testing in a life assessment framework requires periodic sampling as the material ages in-service. Hydrogen embrittlement of steels is a particular concern for materials employed in hydrogen-powered vehicles and offshore platforms, as well as in power plant components [97,98]. Furthermore, hydrogen can be added to materials during the manufacturing process [99]. Therefore, investigating the effects of hydrogen embrittlement is of great interest. One of the latest studies on this matter has been conducted by García et al. [100], who investigated the effects of hydrogen embrittlement on the tensile properties of three different CMV steels (parent material, welding, and heat-treated welding) through small punch tests. They found that the yield strength of the steels can be assessed by the SP test yield load and that the technique is able to evaluate the degradation of the material tensile properties due to the hydrogen environment. Fatigue studies are crucial in many engineering fields and especially in the design of nuclear reactors because of the temperature and neutron flux cycles they are subjected to [101,102]. Hirose et al. [101] and Nogami et al. [102] for example describe the development of small-scale fatigue specimens, with a specimen gauge diameter of 1.25 mm for testing reduced activation ferritic-martensitic steels in nuclear applications. These cyclic load test present challenges for test machine alignment, load control and measurement of extension, especially for applications to irradiated specimens in a hot cell. Validation of these small-scale fatigue tests for full-size components is a challenge, due regard to the potential in-service load cycles (strain range and hold periods) and operating environment is required as well as the impact of thermal or environmental ageing that might be expected to occur in an operating reactor. It is envisaged that such small-scale tests will be invaluable as a means of assessing for example the performance not only of the base material but also the welding fabrication process and therefore reducing the overall time and cost to develop a material that is fit for purpose. Therefore, the development of new testing methods using small specimens is of great interest in current research [101][102][103][104]. Roebuck et al. [105] have developed a novel miniature specimen testing facility use for assessing the high temperature properties of Ni-base superalloys such as CMSX-4 used in the manufacture of gas turbine hot gas path components. 
This facility operates at temperatures above 1000°C, includes water-cooled grips and has been used to investigate flow stress dependence on strain and temperature, resistivity measurements, oxidation, pyrometry measurements and recrystallisation kinetics. The test facility also comprises an environmental chamber and is equipped to apply high heating and cooling rates and variable thermomechanical loads. Single crystal superalloys such as CMSX-4 are subject to very arduous operating conditions in an operational gas turbine engine and are often manufactured in relatively thin sections, hence the need for miniature specimens and suitable test facilities to determine properties and characterise their behaviour.
Plant assessment model adopting small specimen creep testing in a condition-based monitoring framework
The section 'Industry practice for condition monitoring and life management' and Figure 14 illustrate the current condition assessment process, which has reliably been used in the UK for many years. This approach results in an increasing inspection and assessment scope as the plant ages, until a point at which the economic life is reached and replacement is advised. The development of this staged approach to life management has been heavily influenced by the periodic inspection, condition assessment and remediation of defects and damage in welds, with the assessment of the condition of 'parent' material following later in life and primarily influenced by the utilities' desire to manage their requirements for large capital spend on plant refurbishment. As an example, for high temperature CMV main steam pipes on conventional plant this staged approach is typically applied as follows (a minimal scheduling sketch of this timeline is given at the end of this section):
- Stage 1: first weld inspections after 50 khr of operation;
- Stage 2: first parent material checks (bends) initiated at circa 130 khr;
- Stage 3: initiation of planned inspection of straight pipe sections at circa 190 khr, with the first pass of bend inspections completed;
- Stage 4: economic replacement scheduled at around 260 khr.
The case study presented in the section 'Case study: ageing main steam pipework' shows that considerable effort from inspection bodies is required throughout the life of the plant to understand the rate at which the material deteriorates until component replacement is required. This has shown the uncertainty that the plant owner is faced with, due to the lack of consistency in inspection data trends. In broad terms, the generating plant operates on a 3- to 4-year ahead operating ticket, whereas ideally the plant operator would prefer a longer-term 8-10 year forward prediction of the rate of life consumption to facilitate economic decisions related to capital investment and optimisation of periodic statutory inspections. In addition, a more predictive assessment of life consumption would provide the plant operator with the opportunity to optimise operation such that the economic lifetime of the assets can be safely and cost-effectively extended. The optimisation of life consumption (to minimise outage inspection scope and prolong asset life) is worth multimillion pounds over the life of a station. Figure 14 illustrates the current IBA procedures, with the inclusion of small specimen testing and on-load condition assessment highlighted as areas that warrant further development and implementation.
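The following minimal sketch maps accumulated running hours onto the staged CMV timeline listed above. The thresholds are the indicative values quoted in this section; the helper is purely illustrative of the staged logic and takes no account of inspection findings, operating regime or economics, all of which drive real planning decisions.

# Indicative stage thresholds (thousands of operating hours) taken from the staged
# CMV main steam pipe approach described in the text; purely illustrative.
STAGES = [
    (50.0,  "Stage 1: first weld inspections"),
    (130.0, "Stage 2: first parent material (bend) checks"),
    (190.0, "Stage 3: planned straight-section inspections"),
    (260.0, "Stage 4: economic replacement"),
]

def next_stage(running_khr: float) -> str:
    """Return the next stage milestone for a given accumulated running time in khr."""
    for threshold, description in STAGES:
        if running_khr < threshold:
            return f"{description}, due at ~{threshold:.0f} khr (~{threshold - running_khr:.0f} khr ahead)"
    return "Beyond the Stage 4 threshold: replacement should already be planned"

print(next_stage(145.0))  # e.g. a unit at 145 khr is approaching the Stage 3 inspections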
The development of on-load condition assessment methods is not the main subject of this particular paper; however, the authors are involved in other complementary research to develop these approaches for plant-based applications [106]. In an example of this approach for the assessment of main steam bends [107,108], routine data assimilated during current plant monitoring and outage inspections is used seamlessly in computational models to rapidly assess current condition and predict remaining life. This methodology is being further explored and developed as practical 'tools' within the UK Flex-E-Plant research consortium related to the analysis of main steam pipe systems on conventional plant. Small specimen creep testing provides the opportunity to remove some of the uncertainty associated with the site outage inspection and assessment processes currently used, illustrated by the comparison between surface hardness and creep replica assessments acquired from outage inspections in Figure 26. Based on the current maturity level of the testing techniques outlined in the section 'The role and application of small specimen creep testing', it is proposed that the most practical approach to consider at the moment is the further development and deployment of the impression creep method, particularly since it has been exploited on ageing plant by Brett [13-15,109] for purposes of component risk ranking and condition assessment (a sketch of the data conversion commonly used with this method is given after this section). The extensive programme of sampling undertaken on Slovakian reactors [79,80,110] is also noteworthy and is another example of the proactive and systematic use of these small specimen testing techniques. It is worth noting that other methods, such as the two-bar creep test specimen, may provide additional benefits due to the potential to model the tertiary phase of creep; however, further work on miniaturisation is likely to be required. The key point to note is that more precise information is required on the in-service rate of material ageing and any subsequent acceleration of the ageing rate that might indicate that repairs or replacements are required.
Implementation of small specimen testing
The intention is to provide more reliable and periodic measurements of material properties throughout the operating life of the asset. This approach can be applied to both conventional and nuclear generating assets and will allow the plant operator sufficient time to implement positive corrections to operation that maximise the useful life of the asset. Data obtained from these small specimen tests can be used to:
- amend and update relevant pipeline system and component life prediction models;
- advise plant operators on creep strain rates and changes in material behaviour, hence allowing operating conditions to be optimised with respect to the plant owner's requirements for economic asset life;
- influence and optimise the inspection scope at subsequent statutory outages;
- provide a more informed view on the loads acting on complex regions such as welds, complemented by expert review of outage findings from weld inspections and metallography, thereby improving the management of weld integrity;
- enable some current non-productive site examination methods to be gradually phased out or improved.
In order to provide useful information to the plant operator, results from small specimen testing must be acquired at a suitable time in life to enable appropriate modifications to plant operation to be implemented.
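As referenced above, impression creep test output is usually converted to equivalent uniaxial quantities through reference-stress parameters. The sketch below uses the conversion factors commonly quoted in the literature for the recommended rectangular-indenter geometry (η ≈ 0.43 relating mean indenter pressure to equivalent uniaxial stress, and β ≈ 2.18 relating steady displacement rate and indenter width to minimum creep strain rate); these factors are geometry dependent and should be taken from the applicable code of practice rather than from this illustration, and the example numbers are arbitrary.

# Reference-stress conversion commonly quoted for impression creep testing (hedged sketch).
# eta and beta are geometry-dependent; the defaults below are the values usually cited for
# the recommended rectangular indenter and are illustrative only.
def equivalent_uniaxial_stress(mean_pressure_mpa: float, eta: float = 0.43) -> float:
    """Equivalent uniaxial stress from the mean pressure under the indenter."""
    return eta * mean_pressure_mpa

def minimum_creep_strain_rate(steady_disp_rate_mm_per_h: float,
                              indenter_width_mm: float, beta: float = 2.18) -> float:
    """Equivalent uniaxial minimum creep strain rate from the steady indentation rate."""
    return steady_disp_rate_mm_per_h / (beta * indenter_width_mm)

# Example: 1 mm wide indenter, 200 MPa mean pressure, 1e-5 mm/h steady displacement rate.
print(equivalent_uniaxial_stress(200.0))       # ~86 MPa equivalent uniaxial stress
print(minimum_creep_strain_rate(1e-5, 1.0))    # ~4.6e-6 per hour equivalent strain rate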
Using the current CMV life management methodology summarised previously as a reference for current practice, the following modification to the current approach is proposed for high temperature pipework. This essentially provides an earlier baseline condition assessment from targeted sample locations, supplemented by interrogation of on-line monitoring data. Subsequent statutory inspections provide periodic updates on the condition of the in-service ageing materials. Importantly, this modified approach gives the operator more opportunity to optimise how they can effectively manage the plant. The proposed modified approach is as follows (weld inspections would start initially as per normal practice at 50 khr): . Stage 1: Develop full pipeline model(s) of a lead system, which allows estimates of creep strain rate based on design values for material creep models, hanger survey data, plant temperature and pressure monitored data. Ideally, pipeline model(s) developed by ∼75 khr and used to proactively support future outage inspection planning. This essentially provides the baseline or reference model(s) for the plant. . Stage 2: Start to acquire small specimen samples from lead pipeline(s). Specimens should be obtained from straight pipe sections that are adjacent to bends proposed to be first inspected at ∼100 khr and from positions that will not require subsequent repair or additional monitoring. Update pipeline model(s) accordingly and compare predictions against outage inspection findings. Review and update future periodic inspection plans (including weld inspection plans). . Stage 3: Select additional targeted small specimen samples to further assess and optimise pipeline model(s), thereby supporting continued safe operation towards end of life. The selection of suitable locations for additional material sampling would be informed by assessment of previous inspection findings. Review and update periodic inspection plans. . Stage 4: At an economic point, plan to implement replacements. It is important to recognise that there is a cost to implement small specimen testing, however it is anticipated that this will be offset by cost reductions elsewhere, such as: . Reduction in the use of diametral strain measurements, . Optimisation of outage site creep replica and hardness sampling and analysis campaigns, . Optimisation of outage weld inspections, Concluding remarks This paper has focussed on the application of small specimen testing methods with an emphasis on their place in a modified and practical life assessment framework that could be applied to both conventional and nuclear generating assets. Examples have been provided on current practice associated with invasive site inspections and material condition assessment towards the end of economic life on conventional plant pipework systems. A previous review [3] has described the use of small testing methods to satisfy specific material property and data requirements. The exacting demands associated with the use of small specimen testing methods in nuclear plant applications strongly emphasises the importance of developing agreed codes of practice and standards. The successful development and validation of alloys under consideration for fusion reactors demands the application of standardised small specimen test methods. Importantly, the experiences described in this paper on the successful application of both SP and impression creep techniques to support operational plant gives a clear indication of their potential use in the field. 
There are of course still a number of challenges to overcome before their use can be more widespread to assist the plant operator. The case study example provided shows how the current methods for assessing the condition of materials via hardness and surface replica surveys have some scope for improvement, ideally towards a position where more mechanistic material behaviour models can be applied, which would be regularly and seamlessly updated with operational and inspection data. Information from the application of small specimen testing campaigns can support this objective, but only if applied in accordance with a timely and structured approach as described in this paper. The development of validated small specimen testing methods will also require complementary innovation of experimental measurement methods, facilities and data interpretation. Developments to improve measurement methods applied in experimental tests may well result in useful applications on operating plant. The use of laser based measurement systems is now accepted and applied (for measuring component displacements) to assist plant investigations associated with premature failure of welds. High resolution and high temperature optical strain measurement systems have been taken out of the laboratory and engineered for site applications [55]. Features of these high temperature optical strain measurement systems have recently been used to assist the first site application of creep damage sensors [56]. There are also examples of the first trials of noncontact full-field strain measurement on high temperature plant using digital image correlation [60] and the site application of thermographic approaches for defect identification and residual stress measurement using infra-red camera systems [111]. The power industry is facing severe challenges as the economy transitions to a low carbon future. This puts much emphasis on conventional and nuclear AGR plant continuing to operate reliably, cost-effectively and with a focus on life extension. This paper has presented the current practice for life management of high temperature and pressure systems and proposes that small specimen testing methods should ideally be used earlier in the plant lifecycle. The current inspection based approach for life management is unable to provide sufficiently rigorous future life predictions that allow the plant operator to efficiently optimise plant operation and reduce through life inspections costs. It is recognised that developing rigorous predictive life assessment models is not trivial; however targeted use of small specimen testing methods described in this paper and within a new life assessment framework that also exploits the capabilities offered by modern computational techniques, provides an opportunity to achieve this objective. To support this aim there is a critical need to improve the exploitation of extensive plant data acquired during outages, along with online data routinely captured during operation. These challenges are being addressed as part of the remit of the EPSRC funded Flex-E-Plant consortium and it should be acknowledged that some aspects of this challenge have been addressed in other recent research and plant trials; for example: . Plant applications of impression creep testing for risk ranking components [35,38], . 
The development of computational models for main steam pipe bends that use plant outage inspection data as model inputs to provide forward predictions of creep life consumption based on measured operational loads [42], . Ongoing development and trials of creep damage sensors on operating plant [17]. Hence, it is the authors' belief that the development and practical use of such predictive life assessment tools within the proposed new life assessment framework is now both cost-effective and achievable. The ultimate aim is to develop 'point of application' material models that can seamlessly use in-service metallurgical, inspection and operational data and provide the station operator with a timely and predictive life assessment capability. These aims are applicable to both conventional and nuclear generating stations. Moreover, the arduous operational duty experienced by conventional generating plant provides a useful source of both metallurgical and operational experience to assist the cost-effective life management of existing nuclear AGR plants and could support the approach to life management of materials used in next generation nuclear reactors.
An experimental study on performance of starch extracted from wheat flour as filtration control agent in drilling fluid
Received Dec 23, 2019; Revised Feb 2, 2020; Accepted Jun 5, 2020
The loss of mud filtrate into a porous, permeable formation, caused by a hydrostatic pressure that is high compared with the formation pressure, is known as fluid loss. It causes major problems in the well during drilling, such as a poor cementing job, stuck pipe and formation damage. Thus, to protect the well from such problems and to make drilling safe and effective, an additive, starch, is extracted from wheat flour and used as a fluid loss control agent. The purpose of this research is to investigate the potential of this additive to form environmentally safe, non-toxic, highly biodegradable and low-cost water-based drilling fluid samples with varying amounts of starch. Experimental results showed that starch obtained from wheat flour gives an improvement in rheological properties compared with the starch available in the market, when both are used in equal and in varying quantities; wheat-flour starch was observed to be the more efficient of the two. Moreover, while the efficiency of the market starch is good, it is further improved by extracting the starch from wheat flour using the centrifugation process.
INTRODUCTION
The drilling fluid has obligatory functions such as carrying rock cuttings to the surface, cleaning and cooling the bit, decreasing resistive forces, stabilizing the wellbore and preventing formation fluids from flowing out of the pores into the borehole. Various methods for designing suitable drilling muds have been developed to avoid the complex problems encountered during drilling operations, from the initial operations executed in the USA (which used simple blends of clays and water) to the complex blends of particular natural and artificial materials that are currently used. The drilling mud should be user friendly, cost effective and economically viable. Therefore, drilling muds are formulated to decrease formation damage and to ensure the feasibility and economic viability of rotary drilling in hydrocarbon-bearing formations. The filter cakes formed after the intrusion of drilling mud into the pore space of the pay zone are compressible and have variable porosity and permeability characteristics, with low void space at the filter channel surface and maximum void space at the cake surface [1]. In order to reduce filtrate invasion, fluid loss additives such as organic polymers are used, which prevent water invasion into the formation. During the formulation of a mud, the microscopic structure and composition of the associated filter cake, and knowledge of its filtration characteristics, are of main importance [2]. Different drilling muds are used in the borehole during drilling and completion. The most significant factor for increasing production is the physical and chemical compatibility of the mud with the reservoir rock. Through formation damage caused by invasion, these muds can reduce the productivity of the well. Consequently, additives such as CaCO3 are used, which reduce the chances of such damage by forming a filter cake of low permeability and optimum thickness that limits further invasion of solids and filtrate into the pore spaces of the rock. After drilling, these cakes are washed off to maximize flow into the wellbore [3].
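Static filtration through a deposited cake is commonly approximated as growing with the square root of time once the initial spurt has passed, which is why the 30-minute API fluid loss is often estimated by doubling a 7.5-minute reading. The sketch below illustrates that approximation only; the volumes are made-up example numbers, and spurt loss and cake compressibility are neglected.

import math

def estimate_api_fluid_loss(volume_ml: float, elapsed_min: float, target_min: float = 30.0) -> float:
    """Estimate the filtrate volume at target_min assuming cumulative volume ~ sqrt(time)."""
    return volume_ml * math.sqrt(target_min / elapsed_min)

# Example: 4.1 ml collected after 7.5 minutes projects to ~8.2 ml over the standard 30-minute test.
print(round(estimate_api_fluid_loss(4.1, 7.5), 1))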
Fluid loss and mud viscosity are important factors that must be monitored throughout the drilling of a well. For this purpose the mud is treated with several types of additives, i.e. different polymers and chemicals, to achieve the requirements of the particular well, such as rheology, fluid loss control and mud weight. Starch and calcite are the most important materials used, respectively, to control fluid loss and to increase the weight of mud by forming a mud cake [4].
RESEARCH METHOD
2.1. Drilling fluid preparation and properties
A few additives, along with barite (BaSO4) and calcite (CaCO3), are most commonly used in water-based drilling mud. The three main factors that influence the performance of a drilling fluid are its density, viscosity and filtration behavior. Therefore, particular consideration was paid to these factors during the experimental preparation of the water-based fluid samples [4]. The fluid samples were prepared at laboratory scale following the convention that 1 g of any material added to a standard 350 ml laboratory barrel is equivalent to the addition of 1 pound of that material to 1 barrel of fluid. This study involved the preparation of four water-based drilling fluid samples containing bentonite as a filtration controller and viscosifier, caustic soda for pH control, and barite as a weighting agent. In addition, soda ash and xanthan gum were used as hardness and rheology control materials, respectively. The composition of all these additives is constant for all prepared samples except the starch. Most importantly, the concentration of starch as the filtration control agent was varied among the samples, which were prepared to a density of 10.5 ppg. One of the samples, labeled sample no. 1, is based on pure starch and the remaining three samples are based on starch extracted from wheat flour in varying amounts; the compositions of the prepared samples are shown in Table 1.
Density of mud
The density of the mud is the main parameter to consider during the study and before the drilling process, as it directly affects the properties of the filter cake, i.e. formation damage and filtrate loss. An apparatus named the mud balance is used to select a mud weight in ppg that causes the least damage to the ceramic disk. The most common additive used to increase the weight of mud in production zones is calcite (CaCO3), with barite (BaSO4) used to a lesser extent in drilling fluids in the hydrocarbon industry [5]. Four drilling mud samples were prepared in this study by formulating both starches to achieve a density of 10.5 ppg. The amount used for the formulation of the pure starch-based sample is 0.40 g, and the amounts for the wheat-flour based samples range from 0.35 to 0.45 g.
Drilling fluid rheological properties
Rheology is the study of the flow of matter and the deformation of fluids. Its importance is acknowledged in the analysis of fluid flow (velocity profiles), the viscosities of fluids (apparent, plastic and Marsh funnel viscosities), cleaning of the annular borehole and the friction pressure losses. The rheological properties form the basis of all investigations of wellbore hydraulics and of the evaluation of the functionality of the mud system. The rheological properties also include the gel strength and yield point of the fluid. Mud rheological properties, i.e. gels, viscosities, yield point and density, are tested continuously throughout the operation of drilling a well.
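The rheological quantities discussed in this section are normally reduced from direct-indicating (Fann-type) viscometer dial readings using the standard API relations: plastic viscosity is the difference between the 600 and 300 rpm readings, the yield point follows from the 300 rpm reading and the plastic viscosity, and the rotor speed maps to shear rate at roughly 1.703 s⁻¹ per rpm. The dial readings in the sketch below are illustrative values only, not data from this study.

def bingham_from_dials(theta600: float, theta300: float) -> dict:
    """Standard API reduction of 600/300 rpm dial readings to Bingham plastic parameters."""
    pv = theta600 - theta300          # plastic viscosity, cP
    yp = theta300 - pv                # yield point, lb/100 ft^2
    av = theta600 / 2.0               # apparent viscosity at 600 rpm, cP
    return {"PV_cP": pv, "YP_lb_per_100ft2": yp, "AV_cP": av}

def shear_rate_per_s(rpm: float) -> float:
    """Approximate shear rate for a standard rotor-bob geometry (300 rpm ~ 511 1/s)."""
    return 1.703 * rpm

# Illustrative dial readings, not measurements from this study.
print(bingham_from_dials(theta600=62.0, theta300=40.0))   # PV = 22 cP, YP = 18 lb/100 ft^2
print(round(shear_rate_per_s(300)))                       # ~511 1/s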
Mud rheological properties are very critical to maintain and control, since failure to do so results in lost time and financial loss and, in extreme cases, in the abandonment of the well. Filtration, pH, chemical analyses (alkalinity and lime content, chloride, calcium, etc.) and resistivity, apart from rheology, are also tested throughout the drilling of a well [6]. In drilling fluid laboratories as well as on the rig site, a rotational viscometer is frequently used to measure the rheological properties of the mud. Readings are taken at 600, 300, 200, 100, 60, 30 and 6 rpm (rotations per minute). These readings are then plotted on a chart of shear stress against shear rate, which is used to determine the viscosity and the appropriate viscosity model.
Bingham plastic fluid
The Bingham plastic model is a basic two-parameter model widely used in the drilling industry to identify the flow properties of different mud types. It is the most common fluid model for estimating the rheology of non-Newtonian fluids. The basic assumption of this model is that shear stress is a straight-line function of shear rate. The yield point, also named the threshold stress, is the intercept where the shear rate is zero. The plastic viscosity (PV) is best kept as low as possible for fast drilling, which is achieved by reducing the colloidal solids. The yield point (YP) should be high enough to carry cuttings out of the hole, but not so high that the pump pressure becomes excessive when mud flow is initiated. Adjustments to YP are made through mud treatment. The Bingham plastic model has its own limitations at both low and high shear rates. The physical reason behind this behavior is that the liquid generally contains particles (clay) or large molecules (polymers) which have some kind of interaction, creating a weak solid structure, formerly known as a false body; a certain amount of stress is required to break this structure. Once that structure has been broken, the particles tend to move under viscous forces; when the stress is removed, the particles link together again [7]. The Bingham plastic model produces results that are acceptable for diagnosing a drilling mud, but its accuracy is not sufficient for hydraulic calculations. A Bingham body does not begin to flow until a shearing stress corresponding to the yield value is exceeded. The results for the Bingham plastic model are obtained by plotting the graph of shear stress against shear rate as shown in Figure 1, with the shear stress and shear rate calculated using Equations (5) and (6).
Drilling fluid pH determination
The importance of pH is clear from the fact that it affects the solubility of the organic thinners, contaminant removal, corrosion mitigation and the dispersion of the clays present in the mud. pH plays a vital role in representing the hydrogen ion concentration of a mud, and is mostly used to express the acidity or alkalinity of a drilling fluid, especially a water-based mud. Generally, a numerical value in the range 0-14 is used to represent the pH of a fluid, which is an inverse measure of the hydrogen ion concentration [4]. pH is expressed by the following equation: pH = -log10[H+], where [H+] is the hydrogen ion concentration in mol/L. According to this relation, the pH value decreases as the acidity of the fluid increases through the addition of hydrogen ions. A pH of 7 is generally considered neutral; readings above 7 indicate an alkaline fluid.
A pH below 7 indicates an acidic fluid. Three main chemical components are involved in the alkalinity of a drilling mud: hydroxyl ions (OH-), carbonate ions (CO3^2-) and bicarbonate ions (HCO3-) [8]. The reduction of acidity by these ions is represented by, and known as, the alkalinity. For better measurement of pH, a pH meter is mostly used rather than pH paper, because the pH meter gives more accurate values; however, it must be ensured that the pH meter is calibrated accurately.
RESULTS AND DISCUSSIONS
3.1. Characteristics of drilling fluids
The achieved densities of the four drilling fluid samples, based on both starches, are 10.5 ppg, as shown in Figure 2. The rheological properties of the four mud samples prepared in this study, based on the different filtration controllers, are shown in Table 2.
Bingham plastic model
As discussed, non-Newtonian fluids exhibit a relationship between the shear rate and the shear stress measured for the formulated samples, as shown in Figure 1. The graphs plotted for the four water-based drilling fluid samples of 10.5 ppg show differing trends, except for sample no. 1 and sample no. 3, which are similar to some extent before diverging, as shown in Figure 1. A general trend line is drawn for the Bingham plastic fluid and it is observed that the fluid samples with the two different filtration control agents exhibit much higher shear stress, with a yield point of 35 lb/100 ft² obtained where the trend line cuts the vertical axis in Figure 1 for the wheat-flour starch-based sample no. 4; this is also reported in Table 2 in comparison with the pure starch-based sample no. 1 and the other wheat-flour starch samples.
pH determination
The pH of the prepared samples is determined using a pH meter and the obtained results are shown in Figure 3. If a comparison is made between the starch-based samples, i.e. those with pure starch and those with starch from wheat flour, a difference is observed between the pH values of all samples. However, sample no. 1, which contains 0.40 g of pure starch, shows a pH value almost identical to that of sample no. 3, which contains 0.35 g of wheat-flour starch, as shown in Figure 3. This indicates that the wheat-flour starch is the more efficient in terms of pH, because sample no. 1 requires 0.40 g of starch for a pH value of 9.25, whereas sample no. 3 reaches a pH value of 9.27 with only 0.35 g of starch.
CONCLUSION
This study investigated the effect and efficiency of starch derived from wheat flour in preparing water-based drilling fluid samples and compared the results with samples prepared using the same composition but with the starch available in the market and used in industry. On the basis of the analysis and interpretation of the laboratory measurements, the main conclusions and recommendations are listed below:
− A mud weight of 10.5 lb/gal, which is optimum for well X, was selected among the five prepared mud densities, considering that it can sustain the formation pressure; using another mud weight, the formation would start to fracture under the effect of the mud weight.
− Starch obtained from wheat flour improves the rheological properties of the drilling fluid compared with the commercial starch available in the market, both at equal and at varying dosages; wheat-flour starch is therefore the more efficient of the two.
− The performance of the commercial starch is acceptable, but it can be further improved by extracting starch from wheat flour using the centrifugation process.
− It is also concluded that the lower the amount of wheat-flour starch in the drilling fluid, the higher the pH value.
Comparative Study on Geopolymer Binders Based on Two Alkaline Solutions (NaOH and KOH) This study specifically investigated the influence of the composition of aluminosilicate material i.e. the substitution of metakaolin by rice husk ash and the nature of alkaline activators (Na + /K + ) on mineralogical, structural, physical and mechanical properties of geopolymer binders. This influence was evaluated based on X-ray diffraction (XRD), Fourier Transform InfraRed spectroscopy (FTIR) and Scanning Electron Microscope (SEM analyses, apparent density, water accessible porosity, compressive strength and thermal properties. Two types of geopolymer binder were synthesized according to the type of alkali activator used, the NaOH-based geopolymer and the KOH-based geopolymer. The results of characterization performed after 14 days of curing of geopolymer samples showed that the activation of the aluminosilicate powder using alkaline solution led to change in their microstructure. The highest compressive strength was obtained with the NaOH-based geopolymer. valorization, only limited to artisanal applications such as pottery and building materials [1]. Additionally, value addition to these materials can potentially be achieved through their geopolymerization yielding geopolymer binder for stabilization of compressed earth bricks (CEBs) [2]. This approach not only improves the performance of CEBs but also limit the environmental damages linked to the production of the commonly used cementitious binders such as cement and lime [3]. The geopolymer, being more environmentally friendly and cost-effective than the cementitious binder [4], would contribute to catering for affordable and decent housing to the needy majority of rural population in Burkina Faso. The geopolymer binder is obtained by geosynthesis process through activation of amorphous aluminosilicate materials with alkaline solution of sodium or potassium hydroxide (Na + /K + ). Previous studies have shown that the highest level of dissolution of alumina (Al 2 O 3 ) and silica (SiO 2 ) precursor is achieved in the presence of Na + rather than K + cations. On the other hand, the degree of gel formation is more important in potassium-based rather than sodium-based geopolymer [5]. Therefore, the nature of the alkaline solution appears to play fundamental role in the geopolymer synthesis and can influence the properties of the final materials. Xu and Deventer [6] claimed that KOH-based geopolymer has higher compressive strength compared with NaOH-based geopolymer. However, Palomo et al. [7] additionally showed that NaOH-based geopolymer can yield higher compressive strength than KOH-based geopolymer depending on the variations in curing temperature and time or alkali activator/aluminosilicate ratio. Duxson et al. [8] concluded that the Si/Al ratio is an important parameter for studying the effect of alkali activators on the compressive strength of the geopolymer. Activation of aluminosilicates can be accelerated or improved by applying moderate heat treatment. Nevertheless, hardening at elevated temperatures (above 100˚C) promotes the appearance of cracks and can negatively affect the properties of the geopolymer binder [9]. Metakaolin and rice husk ash were used as the main source of aluminosilicate material for the synthesis of geopolymer as in previous studies [10] [11]. In the present study, two alkaline solutions (NaOH/KOH) were additionally used as activator to investigate the influence of their nature on the properties of geopolymer binders. 
This study specifically focuses on the mineralogical, physico-mechanical and thermal properties of the geopolymer binders. It also aims to provide a better understanding of the type of alkaline solution to be used for the geopolymerization of compressed earth bricks.
Materials
Two powders (Figure 1), an aluminosilicate and a siliceous powder sourced respectively from metakaolin and rice husk ash, were used as the basic materials for the formulation of geopolymer binders. The metakaolin (MK) is obtained by heat treatment of local kaolin (K) at 700°C. The rice husk ash (RHA) was obtained by 3 hours' mineralization at 550°C of the carbon residue resulting from rice husk gasification. Mineralogical, chemical and physical characterizations of these materials showed their amorphous nature and their potential for the synthesis of geopolymer binders [10]. Sodium and potassium hydroxide solutions at a concentration of 12 M were used for the activation of these two powders. They were obtained by dissolving pellets of NaOH and KOH (99% purity, provided by the COPROCHIM company) in distilled water.
Samples Preparation
The formulation of the geopolymer binders using the NaOH and KOH activators followed the description given in the previous study on the characterization of NaOH-based geopolymer binders [10]. The geopolymer paste is obtained by mixing the alkaline solution with the powders. Mass ratios (alkaline solution/powder) of 0.7 and 0.8 were used for the formulation of the pastes with the KOH and NaOH activators, respectively. The two mass ratios (0.7 and 0.8) were adopted in view of the difference in the densities of the two alkaline solutions, in order to obtain a similar consistency of the paste. Homogenization of the blends was achieved using a HOBART blender for 10 minutes. The paste obtained is used to make prismatic test pieces (4 × 4 × 16 cm³). A total of six formulations of geopolymer samples were synthesized:
• AN: aluminosilicate powder (100% metakaolin) + NaOH.
The remaining formulations follow the same naming scheme: BN and CN substitute part of the metakaolin with rice husk ash and are activated with NaOH, while AK, BK and CK are the corresponding KOH-activated formulations. The different test pieces were then cured for 14 days, comprising 7 days at room temperature in the laboratory (30°C ± 5°C) and 7 days at 60°C in an oven. After curing, they underwent different characterizations (mineralogical, physical, mechanical and thermal) in order to highlight the influence of the nature of the alkaline solution and of the substitution rate of metakaolin by rice husk ash on the geopolymer binders.
Physical, Mechanical and Thermal Properties
The following characterization was performed to evaluate the influence of the nature of the alkaline solution on the geopolymer binders:
• The water accessible porosity and apparent density (ISO 5017) are determined using Equations (1) and (2), where ρ_d is the apparent density of the geopolymer samples, ρ_w is the density of water and ε is the water accessible porosity of the geopolymer samples (%).
• The compressive strength was determined by applying Equation (3). The breaking force is determined using a hydraulic press (ETI-Proeti) with a load cell capacity of 300 kN, at a loading rate of 0.25 kN/s.
• Thermal conductivity is the only thermal property measured on the geopolymer binder samples. It was measured using the hot-wire method, which consists in applying a constant power to an electric wire immersed in the cylindrical sample (4 cm diameter, considered to be infinite). The desired thermal parameter is then deduced from the mathematical description.
Figure 2 shows the diffractograms of the geopolymers based on NaOH (AN, BN and CN) and KOH (AK, BK and CK).
It reveals the formation of zeolitic products whose nature and degree of crystallinity depend on the nature of the cation (Na+ or K+) from the alkaline solution used during the synthesis and also on the composition of the aluminosilicate powders.
Mineralogical Characterization of Geopolymer Binders by XRD
The diffractograms of the NaOH-based geopolymers in Figure 2(a) show the formation of crystalline zeolitic products such as zeolite A, faujasite and hydrosodalite. The formation of these minerals was also influenced by the addition of rice husk ash during geopolymer synthesis. Thus, zeolite A appears only in the sample which does not contain rice husk ash (AN), while the crystallinity of faujasite intensifies with the addition of rice husk ash (BN and CN). Hydrosodalite, unlike faujasite, has crystallinity peaks that gradually disappear with the addition of rice husk ash. The diffractograms of the KOH-based geopolymers in Figure 2(b) show fewer crystalline peaks compared with those of the NaOH-based geopolymers. In addition to the quartz reflection, zeolite F is the only mineral crystallized in these geopolymer samples, and it shows a very low intensity. The presence of zeolite F seems to be favored by the addition of the rice husk ash; the characteristic peak of zeolite F appears more clearly in sample CK (containing 10% of rice husk ash). In addition, halo peaks ranging from 25° to 45° 2θ are observed for all geopolymer samples. They are more pronounced for the samples containing rice husk ash (BN and CN). This reveals the amorphous character of these samples. The halos characteristic of the amorphous phases were previously observed in the diffractograms of metakaolin and rice husk ash, from 15° to 40° and 15° to 45° 2θ, respectively [10]. They tend to move to slightly higher angles in the diffractograms of the geopolymers based on KOH and NaOH: 22° to 43° and 22° to 45° 2θ, respectively. This shows the partial dissolution of the amorphous phase from the raw materials and the formation of a new amorphous phase within the geopolymer materials [12]. The NaOH-based geopolymers presented higher crystallinity than those based on KOH. This could influence the mechanical and physical properties of these geopolymer binders.
Mineralogical Characterization of Geopolymer Binders by FTIR
The infrared (IR) spectra of the geopolymers based on NaOH and KOH are shown in Figure 3. Some bands appear only on the spectrum of the sodium compounds, so they seemingly mark the presence of zeolites, as previously identified in the XRD patterns of these compounds. In addition, their intensity decreases with silica content (BN and CN), which is associated with the formation of the geopolymer gel. On the spectra of the KOH-based geopolymers (Figure 3(b)), a single band around 675 cm⁻¹ is identified and is associated with the Si-O bonds of quartz [17]. Contrary to the infrared spectra of the NaOH-based geopolymers, the intensity of this band remains the same for the three samples (AK, BK and CK). This is in agreement with the observations of S. O. Sore et al.
Thermal Analyses (TG/DTA) of Geopolymer Binders
The thermal analyses (TG and DTA) of the various geopolymer compositions are presented in Figure 4. Thermogravimetric analyses recorded similar mass losses for the NaOH-based compounds (only 4.5% difference) for all silica contents (Figure 4(a)). For the KOH-based compounds, increasing the silica content increased the mass loss, with a 16.5% difference in mass loss recorded (Figure 4(b)).
The differential thermal analyses recorded endothermic reactions at 170°C and 380°C for the NaOH-based compounds (Figure 4(a)), reflecting the evaporation of bound water. Their intensities increase at higher silica contents. The endothermic reaction at 575°C corresponds to the transformation of quartz (α→β) [18]. The endothermic reactions at higher temperatures (683°C and 750°C) correspond to zeolite or geopolymer constituents [19]. The exothermic reactions at 706°C, 822°C and 862°C can be associated with the recrystallization of zeolites or geopolymers [20]. The endothermic reaction at 890°C can be related to the decomposition of the carbonate that was formed, as observed in the FTIR of the NaOH-based compounds (Figure 3(a)). For the KOH-based compounds (Figure 4(b)), the dehydration reactions at 170°C and 402°C are well marked for sample BK. These compounds record endothermic and exothermic reactions similar to those of the NaOH compounds. It is noteworthy that the temperatures at which the exothermic reactions took place decreased from sample AK to CK (858°C → 820°C and 729°C → 673°C). As in the previous case, the reaction at 890°C can be related to the decomposition of K2CO3.
Physical Characterization of Geopolymer Samples
Figure 5 shows the evolution of the water accessible porosity and its relationship with the apparent density of the geopolymer binders (a minimal sketch of the ISO 5017 relations behind these two quantities is given after this section). The water accessible porosity of all samples varies from 34% to 42%, whereas their apparent density varies from 1.2 to 1.3. These properties evolved differently in the NaOH-based and the KOH-based compounds depending on their compositions. For the KOH-based compounds, the correlation between porosity and density is clearly highlighted: the porosity decreases as the density increases, which is related to their silica content (Figure 5(b)). These variations can be explained by the poorly crystalline character of these geopolymers, their high sensitivity to the evaporation of water and the contraction of their network. The NaOH-based compounds always recorded higher density and lower porosity than their equivalent KOH-based compounds. These differences are essentially related to the crystallized nature of these geopolymers, consisting mainly of zeolites. The small decrease in the density of the CN compound can be explained by its well-developed geopolymer phase. To better understand the influence of the type of alkaline solution on the porosity of the geopolymer binders, microscopic observations were performed on geopolymer samples CN and CK. The micrographs taken at two different scales (10 μm and 1 μm) reveal a higher porosity for the KOH-based sample compared with that based on NaOH (Figure 6). This difference in porosity between the two samples is particularly noticeable in the micrographs at the 1 μm scale. This agrees with the results of water accessible porosity shown in Figure 5. Moreover, these micrographs reveal a dense homogeneous structure and some silica grains, which are identified respectively as geopolymer gel and probably quartz from the metakaolin.
Mechanical Characterization of Geopolymer Samples
The compressive strengths of the NaOH and KOH-based geopolymers are shown in Figure 7. The NaOH-based geopolymers show higher compressive strengths than the KOH-based samples. This result can be related to the lower porosity of the NaOH-based geopolymer binders, as shown in Figure 5, and to their higher crystallinity.
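As flagged above, the water accessible porosity and apparent density reported in this section are obtained from the standard Archimedes weighings that ISO 5017 is based on (Equations (1) and (2) referenced in the methods are not reproduced in this text). The sketch below states those relations explicitly; the variable names and the example masses are illustrative assumptions, not measurements from this study.

WATER_DENSITY_G_CM3 = 1.0  # density of water taken as 1.0 g/cm^3 for this illustration

def apparent_density(m_dry: float, m_immersed: float, m_saturated: float) -> float:
    """Apparent (bulk) density from dry, immersed and water-saturated masses (grams)."""
    return m_dry / (m_saturated - m_immersed) * WATER_DENSITY_G_CM3

def water_accessible_porosity(m_dry: float, m_immersed: float, m_saturated: float) -> float:
    """Open (water accessible) porosity in percent from the same three weighings."""
    return (m_saturated - m_dry) / (m_saturated - m_immersed) * 100.0

# Illustrative masses chosen to fall in the range reported for these binders
# (apparent density ~1.25, porosity ~38%).
print(round(apparent_density(31.25, 15.75, 40.75), 2))           # 1.25 g/cm^3
print(round(water_accessible_porosity(31.25, 15.75, 40.75), 1))  # 38.0 %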
Similar behavior of the mechanical properties of geopolymers based on NaOH compared to those based on KOH was previously reported [8] [21]. Xu and Deventer [6] argued that the better performance in compression of samples based on NaOH compared to those based on KOH can be attributed to the higher degree of dissolution of the aluminosilicates in the presence of the sodium hydroxide solution. According to their study, the smaller ionic radius of Na+ compared to that of K+ favors the reaction of ionic pairs with smaller silicate oligomers, thus improving the bond between particles [6].

Thermal Characterization of Geopolymer Samples

The thermal conductivity of the different geopolymer formulations is shown in Figure 8. The NaOH-based geopolymers have thermal conductivities varying from 0.31 to 0.44 W/m·K, while the values for the KOH-based ones, which recorded the lowest densities, range from 0.22 to 0.28 W/m·K. The thermal conductivity of potassium is known to be approximately half that of sodium [22]. This may have partly contributed to the low thermal conductivity of the KOH-based geopolymers. Additionally, Figure 8(b) shows the correlation between the thermal conductivity and the water-accessible porosity of the geopolymer samples. Samples with higher porosity tend to have lower density, which may also have contributed to reducing their thermal conductivity. Feng et al. [23] indeed reported that the higher the porosity, the lower the thermal conductivity. Moreover, the thermal conductivity is influenced by the addition of rice husk ash: Figure 8(a) shows that geopolymer binders containing rice husk ash have higher thermal conductivity. This is due to their relatively denser matrix. The densification is associated with a higher SiO2/Al2O3 ratio, resulting in a better polycondensation reaction in these geopolymer binders [24].

Conclusions

The results obtained in this study highlight the influence of the nature of the alkaline solution (NaOH or KOH) used as activator of the aluminosilicate complex (metakaolin with or without rice husk ash) on the various properties of the geopolymer binders. The XRD mineralogical analyses of the geopolymer binders mainly show the formation of zeolite minerals (especially in the NaOH-based samples) and the presence of amorphous phases (more pronounced in the KOH-based samples). The presence of these zeolite products has been confirmed by FTIR. The type of alkali solution used for activation of the aluminosilicate complex also significantly affects the physico-mechanical and thermal properties of the geopolymer samples. The NaOH-based samples yield the highest compressive strength, the lowest water-accessible porosity and the highest apparent density. The KOH-based geopolymer binders recorded the lowest thermal conductivity, which can be related to their higher porosity. The addition of rice husk ash also improved the physical and mechanical properties but increased the thermal conductivity of the geopolymer samples. Although the thermal conductivity of the NaOH-based materials is higher than that of the KOH-based materials, it still remains lower than that of cementitious materials. Given the results presented in this study and considering the higher cost of potassium hydroxide (approximately twice the cost of sodium hydroxide), sodium hydroxide turns out to be more appropriate for the geopolymerization of compressed earth bricks.
Towards a More Natural Multilingual Controlled Language Interface to OWL

The paper presents ongoing research that aims at OWL ontology authoring and verbalization using a deterministic controlled natural language (CNL) that would be as natural and intuitive as possible. Moreover, we focus on a multilingual CNL interface to OWL by considering both highly analytical and highly synthetic languages (namely, English and Latvian). We propose a flexible two-level translation approach that is enabled by the Grammatical Framework and that has allowed us to develop a more natural, but still predictable, multilingual CNL on top of the widely used Attempto Controlled English (its subset for OWL, ACE-OWL). This has also allowed us to exploit the readily available ACE parser and verbalizer not only for the modified and extended version of ACE-OWL, but also for the corresponding controlled Latvian.

Introduction

Several notations are widely used to make formal OWL ontologies more intelligible for both domain experts and knowledge engineers. They can be divided into several groups: graphical notations, like UML and its profiles (Barzdins et al., 2010); controlled natural languages (CNL), like Attempto Controlled English or ACE (Kaljurand and Fuchs, 2007); and human-readable formal syntaxes, like the Manchester OWL Syntax (Horridge et al., 2006). The latter kind of notation explicitly follows the underlying formalism and therefore requires substantial training to obtain acceptable reading and writing skills. CNL, in contrast, provides the most informal and intuitive means for knowledge representation and has been successfully used in ontology authoring, where the involvement of domain experts is crucial (Dimitrova et al., 2008). Graphical notations are in between and provide a complementary view, unveiling the high-level structure of the ontology in a more comprehensible way.

In this paper we focus on untrained domain experts and end-users, and, thus, on a CNL that has to be as natural and grammatical as possible. Moreover, we focus on multilingual ontology verbalization to facilitate ontology localization and reuse. Note that a CNL has to ensure deterministic interpretation of its statements and bidirectional mapping to OWL, so that the CNL user could easily predict or grasp the precise meaning of the specification that is being written or read, and so that the roundtrip from OWL to CNL and back would not introduce any semantic changes in the ontology (if the user has not made changes in the verbalization). In addition to the highly restricted syntactic subset of full natural language, this is typically achieved by a small set of interpretation rules and a monosemous (domain-specific) lexicon.

The state-of-the-art CNLs for OWL (Schwitter et al., 2008) are based on English, a highly analytical language (strict word order, simple morphology, systematic use of determiners) that facilitates a rather straightforward translation of CNL sentences into their semantic representation (axioms in description logic). Regardless of the chosen notation, English is often used also as a meta-language for naming the logical symbols (class and property names) at the ontology level. Angelov and Ranta (2010) have recently shown that the Grammatical Framework (GF), a formalism and a resource grammar library that provide means for developing parallel grammars, is a convenient framework for rapid implementation of multilingual CNLs.
Such seamless cross-translation capability allows easy reuse of the tools developed for existing CNLs; in this way we will reuse the ACE to OWL and OWL to ACE translators. However, in the case of highly synthetic languages (like Slavic and Baltic) that have rich morphology and relatively free word order, the bidirectional translation to English (i.e., ACE or some other CNL) is not straightforward, especially if we are dealing with statements that represent not only axioms but also rules. For rules (such as SWRL), anaphoric noun phrases (NP) are frequently used: in English they are marked by the definite article, while in Baltic and in most of the Slavic languages such markers are generally not explicitly used and are not encoded even in noun endings. Thus, one of the central problems for semantically precise translation is how to distinguish between axioms and rules, and how to convey which information is new (potential antecedents) and which is already given (anaphors).

In this paper we primarily consider Latvian, a member of the Baltic language group. In Section 2 we briefly describe its design and coverage. In Section 3 we illustrate the proposed two-level approach that is used to translate controlled Latvian to (and from) OWL via ACE as an interlingua. We show that this approach also allows for flexible and independent development of an extended and/or modified (adjusted) controlled English interface at the end-user side, compared to ACE, especially its subset for OWL (ACE-OWL). We conclude the paper with a brief discussion of the current results and future tasks.

Grammar

The information structure of a sentence indicates what we are talking about (the topic) and what we are saying about it (the focus) (Hajicova, 2008). In (controlled) English, changes in the information structure are typically reflected by the use of different syntactic constructions, for instance, by using the passive voice instead of the active voice. In Latvian, this is typically reflected by a different word order, for instance, by changing a subject-verb-object (SVO) sentence into an OVS or SOV sentence. Thus, in languages like Latvian the word order is syntactically (rather) free, but semantically bound. Although the topic and focus parts of a sentence, in general, are not reflected by systematic (deterministic) changes in the word order, it has been shown (Gruzitis, 2010) that, in the case of controlled Latvian, the information structure of a sentence can be systematically and reliably conveyed by relying on a simplified analysis of the topic-focus articulation (TFA), i.e., on simple word order patterns: if the object comes after the verb (the neutral word order), it belongs to the focus part of the sentence (new information), but if it precedes the verb, it belongs to the topic part (given information). As the initial evaluation shows (Gruzitis et al., 2010), the "correct" word order is both intuitively acceptable to a native speaker and enables the automatic detection of anaphoric NPs in controlled Baltic languages (Latvian and Lithuanian). The simplified TFA method can be adjusted also to controlled Slavic languages. It should be noted that in Latvian it could be theoretically possible to impose the mandatory use of artificial determiners, by using, for example, indefinite and demonstrative pronouns; however, such "articles" would be unnatural in most cases. The lack of articles is even more apparent in Lithuanian, which, in contrast to Latvian, has no historic influence from the comparatively analytical German.
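The word-order rule described above is simple enough to state as a toy classifier. The following sketch only illustrates the simplified TFA rule as formulated in this section; it is not the actual grammar implementation, and the shallow-parsed sentence representation and function name are hypothetical.

```python
# Toy illustration of the simplified TFA rule: an object NP that follows the
# verb is treated as focus (new information), while an object NP that precedes
# the verb is treated as topic (given information, i.e. a potential anaphor).
from typing import List, Tuple

def classify_object_np(tokens: List[Tuple[str, str]]) -> str:
    """tokens: a shallow-parsed clause given as (word, role) pairs,
    where role is one of 'SUBJ', 'VERB', 'OBJ'."""
    verb_pos = next(i for i, (_, role) in enumerate(tokens) if role == "VERB")
    obj_pos = next(i for i, (_, role) in enumerate(tokens) if role == "OBJ")
    return "focus (new)" if obj_pos > verb_pos else "topic (given)"

# Neutral SVO order: the object is new information.
print(classify_object_np([("žirafe", "SUBJ"), ("ēd", "VERB"), ("lapu", "OBJ")]))
# OVS order: the fronted object is given information (anaphoric).
print(classify_object_np([("lapu", "OBJ"), ("ēd", "VERB"), ("žirafe", "SUBJ")]))
```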
The survey by Gruzitis et al. (2010) confirmed other important aspects as well that should be addressed in order to make controlled Latvian more natural and intuitive:

• Due to the rich morphology, there are various alternatives and certain reductions possible in the syntactic realization of a sentence, while preserving both the information structure and the abstract syntax tree (in terms of GF); e.g., forming complex attributes instead of relative clauses may lead to more concise and intelligible sentences.

• Explicit determiners ("articles") are preferred in certain cases: an indefinite pronoun ("a") improves the reading of a singular SVO sentence if the object is not restricted by a relative clause, and a demonstrative pronoun ("the") helps in complex rule statements (in addition to the word order).

• Sentences in the plural are often preferred over their counterparts in the singular.

• Limitations of the OWL expressivity (SVO triples only, no time dimension, etc.) can to some extent be lessened at the surface level of the CNL (while preserving the deterministic interpretation), e.g., by using (where appropriate) non-SVO constructions, like adverbial modifiers of place instead of direct objects, and nouns (roles) instead of verbs (actions), and by using the present perfect tense instead of the simple tense (to express a past event that has present consequences).

Therefore, in addition to a grammar that generates the best possible (default) verbalization patterns (taking into account the information structure), we have developed a parallel grammar that allows for completely optional use of determiners and accepts the various syntactic alternatives and extensions. We have also developed a parallel prototype grammar for controlled English that is based on the full ACE with some improvements: we have extended the support for the present perfect tense (e.g., by allowing phrases like "has done something"), and we have taken a pattern from the Sydney OWL Syntax (Schwitter et al., 2008) to provide an alternative way of expressing inverse nominalized properties (e.g., "everything has something as a part" instead of "everything has-part something" or "for everything its part is something"). It should be mentioned that in the highly inflective controlled Latvian both direct and inverse nominalized properties are verbalized in a more flexible and uniform way. To achieve full compliance with the Latvian counterpart, the controlled English grammar has to be further extended with respect to non-SVO sentences (clauses): although adverbial modifiers of place (prepositional constructions) are allowed in the full ACE (e.g., "someone lives in something"), there is no support for the inverse use of a property in such cases, i.e., it is neither allowed to start a relative clause with the relative pronoun "where", nor to change the fixed word order (as in "something is a place where someone lives in"). Again, in controlled Latvian the support for the various relative clauses is ensured in a uniform way.

Implementation

The possible steps of our approach that can be performed during the roundtrip from CNL to OWL and vice versa are illustrated in Figure 1. LavDefSg is a grammar that defines the default verbalization patterns using Latvian singular sentences, LavDefPl is its counterpart for plural sentences, and LavVar is an extended combination of both, extensively allowing for free variations (at both the syntactic and lexical level).
LavVar is used for robust, still predictable parsing (in the ontology authoring direction), while one of the default grammars (depending on the choice of the end-user) is used for paraphrasing LavVar sentences and for verbalizing existing ontologies. EngDef implements the ACE-based English grammar, and EngVar provides a few lexical and syntactic alternatives. Finally, AceOwl implements the chosen interlingua, i.e., it accepts/generates sentences that are generated/accepted by the ACE-OWL verbalizer/parser. All these grammars are implemented in GF and are related by a common abstract syntax. Note that translation (reduction) to/from AceOwl is an internal step of which the end-user is not aware. Existing tools are exploited for the transition to/from OWL, using ACE-OWL as an interchange format (covered by the AceOwl grammar). Other transitions are ensured by the parallel GF grammars.

For a demonstration we use a sample African wildlife ontology that is verbalized in Table 1. During the translation from Table 1 to ACE-OWL (Table 2), all non-SVO statements are reduced to artificial SVO statements (e.g., "lives in something" to "lives-in something", "part of something" to "part-of something"), and all terms are normalized into fixed forms that are conveyed as is to the ontology. The result, in general, is ungrammatical (from the linguistic perspective), but we do not try to make it more grammatical where possible (e.g., the past participle form could be used in the 5th statement); we use it only as a technical interchange format that normally is not visible to the end-user. However, it is a good illustration that explicitly unveils the nature and limitations of OWL. Note that certain conversions are done at the end-user level (while paraphrasing from Var to Def) and are further reflected in OWL. For instance, the present perfect tense can be converted to the simple tense (e.g., "has done something" to "does something") or vice versa, if such alternatives are listed in the domain lexicon (individually for each language and property).

Table 1. The sample African wildlife ontology verbalized in controlled English (first line of each entry) and controlled Latvian (second line).
1. Everything that eats something is an animal.
   Tas, kas kaut ko ēd, ir dzīvnieks.
2. Every carnivore is an animal that eats an animal. Every animal that eats an animal is a carnivore.
   Ikviens plēsējs ir dzīvnieks, kas ēd kādu dzīvnieku. Ikviens dzīvnieks, kas ēd kādu dzīvnieku, ir plēsējs.
3. Every herbivore is an animal that eats nothing but things that are a plant or that are a part of nothing but plants.
4. Every giraffe is a herbivore.
   Ikviena žirafe ir zālēdājs.
5. Everything that is eaten by a giraffe is a leaf.
   Tas, ko ēd kāda žirafe, ir lapa.
6. Everything that has a leaf as a part is a branch.
   Tas, kura daļa ir kāda lapa, ir zars.
7. Every tasty plant is a nourishment of a carnivore.
   Ikviens garšīgs augs ir kāda plēsēja barība.
8. No animal is a plant.
   Neviens dzīvnieks nav augs.
9. If X eats Y then Y is a nourishment of X.
   Ja X-s ēd Y-u, tad Y-s ir X-a barība.

Table 2. The ACE-OWL statements translated from Table 1 (by the AceOwl grammar) or verbalized from the original OWL ontology (by the ACE verbalizer). The prefixes that indicate the POS categories, although accepted by the ACE parser, are used here only for the sake of clarity. The semantic interpretation is acquired by the ACE parser and is given in parallel (in the Manchester notation).
1. Everything that v:eats something is an n:animal.
   ObjectProperty: eats Domain: animal
2. Every n:carnivore is an n:animal that v:eats an n:animal. Every n:animal that v:eats an n:animal is a n:carnivore.
   Class: carnivore EquivalentTo: animal and (eats some animal)
3. Every n:herbivore is an n:animal that v:eats nothing but things that are a n:plant or that v:part-of nothing but n:plant.
   Class: herbivore SubClassOf: animal and (eats only (plant or (part-of only plant)))
4. Every n:giraffe is a n:herbivore.
   Class: giraffe SubClassOf: herbivore
5. Everything that is v:eats by a n:giraffe is a n:leaf.
   Class: inverse (eats) some giraffe SubClassOf: leaf
6. Everything that is v:part-of by a n:leaf is a n:branch.
   Class: inverse (part-of) some leaf SubClassOf: branch
7. Every n:tasty-plant v:nourishment-of a n:carnivore.
   Class: tasty-plant SubClassOf: nourishment-of some carnivore
8. No n:animal is a n:plant.
   Class: animal DisjointWith: plant
9. If X v:eats Y then Y v:nourishment-of X.

Discussion

The two-level translation approach has allowed us to develop a rather sophisticated multilingual CNL on top of the rather restricted ACE-OWL (in terms of naturalness). Of course, ACE-OWL itself could be developed to be equally natural, but the benefit of our approach is that it allows for more flexible, rapid and independent extensions and adjustments to what users consider the most natural verbalization. The proposed approach enables not only a multilingual, but also a multi-dialect interface to OWL: different CNLs can be mixed together or used in parallel, and the interlingua can be relatively easily changed. It should be recalled that our goal is to ensure a predictable interpretation; therefore we could change the interlingua to CPL-Lite, for instance, but not to CPL, which is non-deterministic (Clark et al., 2010). Also note that GF not only enables the precise cross-grammar translation, but also facilitates the application of more flexible and linguistically less restrictive naming conventions at the OWL level.

One might ask why we use an interlingua at all, rather than proceed by translation to and from OWL directly in GF (by providing yet another concrete grammar for the Functional-Style Syntax or some other formal notation of OWL). Indeed, verbalization of existing ontologies could be done in this way, but a problem arises in the reverse direction, from CNL to OWL: the current implementation of GF does not provide support for dealing with anaphors. Thus, by solving the interpretation issues via an interlingua, we get the ontology verbalization functionality for free.

One might also argue that the dependence on a handcrafted domain lexicon is a significant disadvantage. This is the price for flexibility, multilinguality, naturalness and precision. Although it would be possible to generate the English lexicon from a linguistically motivated ontology, the problem is how to acquire the precise translation equivalents. In the case of ontology authoring, common word lexicons could be reused, but, again, the alignment issue arises, and specific multi-word units are often used.

In this paper we have considered only terminological (TBox) axioms and rules. It would be interesting to see to what extent the deterministic TFA method can be adjusted for assertional (ABox) statements. However, for populating an ontology with facts (individuals), some other kind of interface (e.g., GUI forms or tables) could be more appropriate.
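For readers unfamiliar with GF, the following is a minimal sketch of the kind of runtime round trip that the parallel grammars described in the Implementation section enable, assuming the grammars have been compiled into a single PGF file and driven from the GF Python runtime (the pgf module). The file name, concrete-grammar names and example sentence are hypothetical placeholders, and the additional pass through the ACE parser/verbalizer for the OWL step is not shown.

```python
# Minimal sketch of cross-grammar translation with the GF Python runtime (pgf).
# Grammar file and concrete-grammar names below are hypothetical placeholders.
import pgf

grammar = pgf.readPGF("CnlOwl.pgf")            # compiled parallel grammars
lav_var = grammar.languages["CnlOwlLavVar"]    # robust Latvian parsing grammar
eng_def = grammar.languages["CnlOwlEngDef"]    # default English verbalization
ace_owl = grammar.languages["CnlOwlAceOwl"]    # ACE-OWL interlingua

sentence = "Ikviena žirafe ir zālēdājs."       # "Every giraffe is a herbivore."

# Parse with the variant grammar; take the best (first) abstract syntax tree.
prob, tree = next(lav_var.parse(sentence))

# Linearize the same abstract tree with the other concrete grammars.
print(eng_def.linearize(tree))   # English verbalization of the statement
print(ace_owl.linearize(tree))   # interlingua sentence handed to the ACE-OWL tools
```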
Using Contextual Sentence Analysis Models to Recognize ESG Concepts

This paper summarizes the joint participation of the Trading Central Labs and the L3i laboratory of the University of La Rochelle in both sub-tasks of the Shared Task FinSim-4 evaluation campaign. The first sub-task aims to enrich the 'Fortia ESG taxonomy' with new lexicon entries, while the second one aims to classify sentences as either 'sustainable' or 'unsustainable' with respect to ESG (Environment, Social and Governance) related factors. For the first sub-task, we proposed a model based on pre-trained Sentence-BERT models to project sentences and concepts into a common space in order to better represent ESG concepts. The official task results show that our system yields a significant performance improvement compared to the baseline and outperforms all other submissions on the first sub-task. For the second sub-task, we combine the RoBERTa model with a feed-forward multi-layer perceptron in order to extract the context of sentences and classify them. Our model achieved high accuracy scores (over 92%) and was ranked among the top 5 systems.

Introduction

Financial markets and investors can support the transition to a more sustainable economy by promoting investments in companies complying with ESG (Environment, Social and Governance) rules. Today there is growing interest among investors in the performance of firms in terms of sustainability. Therefore, the automatic identification and extraction of relevant information regarding companies' strategies in terms of ESG is important. The use of NLP (Natural Language Processing) methods adapted to the field of finance and ESG could help identify and process related information.

Taxonomies are important NLP resources, especially for semantic analysis tasks and similarity measures [Vijaymeena and Kavitha, 2016; Bordea et al., 2016]. In this context, the FinSim4-ESG Shared Task proposed the tasks of ESG taxonomy enrichment and sentence classification. FinSim-4 is the fourth edition of a set of evaluation campaigns that aggregate efforts on text-based needs for the financial domain [Maarouf et al., 2020; Mansar et al., 2021; Kang et al., 2021]. This latest edition is particularly challenging due to the continuously evolving nature of terminology in the domain-specific language of ESG, which leads to poor generalization of pre-trained word and sentence embeddings.

Several studies have addressed the problem of taxonomy generation for different domains [Shen et al., 2020a; Karamanolakis et al., 2020]. Deep-learning-based embedding networks, such as BERT [Devlin et al., 2018], have proven to be efficient for many NLP tasks. Malaviya et al. [2020] used BERT for knowledge base completion and showed that BERT performs well for this task. Liu et al. [2020] used BERT to complete an ontology by inserting a new concept with the right relation. Kalyan and Sangeetha [2021] used Sentence-BERT [Reimers and Gurevych, 2019] to measure semantic relatedness of biomedical concepts and showed that Sentence-BERT outperforms the corresponding BERT models. Shen et al. [2020b] used Sentence-BERT to build a knowledge graph for the biomedical domain and showed that it obtains the best results.
For the Shared Task FinSim-4, we proposed several strategies based on BERT language models. For the first sub-task, we proposed a model based on pre-trained Sentence-BERT models to project sentences and concepts into a common space in order to better represent ESG concepts. For the second sub-task, we combined the RoBERTa model with a feed-forward multi-layer perceptron to extract the context of sentences and classify them. Official results of our participation show the effectiveness of our models on the Shared Task FinSim-4 benchmark. In terms of accuracy, our best runs ranked 1st and 4th for sub-tasks 1 and 2, with scores of 0.848 and 0.927, respectively.

The remainder of this paper is organized as follows. In Section 2, we present the shared task FinSim-4 and the datasets for both sub-tasks. Our proposed models are detailed in Section 3. The setup and official results are described in Section 4. Finally, Section 5 concludes this paper.

Shared Task FinSim-4

The FinSim 2022 shared task aims to spark interest from communities in NLP, ML/AI, Knowledge Engineering and financial document processing. Going beyond the mere representation of words is a key step towards industrial applications that make use of natural language processing. The 2022 edition proposes two sub-tasks.

Sub-task 1: ESG taxonomy extension

The first sub-task aims to extend the 'Fortia ESG taxonomy' provided by the organizers. This taxonomy was built based on different financial data providers' taxonomies as well as several sustainability and annual reports. It has twenty-five different ESG concepts, each belonging to one of the ESG pillars: environment, social or governance. The organizers provide a training set which consists of terms belonging to each concept. This training set is unbalanced, as can be observed in Table 1, which lists the number of terms for each concept in the training set. Participants were asked to complete this taxonomy to cover the rest of the terms of the original 'Fortia ESG taxonomy'. For example, given a set of terms related to the concept 'Waste management' (e.g. Hazardous Waste, Waste Reduction Initiatives), participating systems had to automatically assign to it all other adequate terms.

Sub-task 2: Sustainability classification

The second sub-task aims to automatically classify sentences as sustainable or unsustainable. A sentence is considered sustainable if it semantically mentions Environmental, Social or Governance related factors as defined in the Fortia ESG taxonomy.

Proposed strategies

Sub-task 1: ESG taxonomy extension

Semantic text similarity is an important task in natural language processing applications such as information retrieval, classification, extraction, question answering and plagiarism detection. This task consists in measuring the degree of similarity between two texts and determining how semantically close they are (from completely independent to fully equivalent). In our case, the terms of a same concept are considered semantically equivalent. Siamese models have been shown to be effective for the semantic analysis of sentences [Linhares Pontes et al., 2018; Reimers and Gurevych, 2019].
Our model is based on Sentence-BERT (SBERT) [Reimers and Gurevych, 2019], a modification of the pre-trained BERT network that uses siamese and triplet network structures to derive semantically meaningful sentence embeddings that can be compared using cosine similarity (Figure 1). This model is trained on a parallel dataset where two paraphrases or semantically similar sentences have a high cosine similarity. We consider all terms of a concept as paraphrases because they share the same semantic information. For instance, the terms 'carbon footprint' and 'carbon data' should have similar sentence representations because they share the same concept 'carbon factor'; meanwhile, the terms 'Water Risk Assessment' and 'Transition to a circular economy' do not share the same concept and, consequently, should have different sentence representations. With the SBERT model, we project all terms into the same dimensional space and then train our logistic regression model to analyze and classify them into their corresponding concept classes.

Sub-task 2: Sustainability classification

For this sub-task, we combine a BERT-based language model [Liu et al., 2019] with a feed-forward multi-layer perceptron to extract the context of sentences and classify them as 'sustainable' or 'unsustainable'. The architecture of our model is described in Figure 2. We take the representation of the [CLS] token at the last layer of these models and add a feed-forward layer to classify an input sentence as 'sustainable' or 'unsustainable'.

Evaluation metrics

All runs were ranked based on mean rank and accuracy for the first sub-task and only accuracy for the second sub-task. The mean rank is the average of the ranks for all observations within each sample. Accuracy determines how close the candidates' predictions are to their true labels:

Accuracy = (1/n) · Σ_{i=1}^{n} 1(ŷ_i = y_i),

where ŷ_i is the predicted value of the i-th sample and y_i is the corresponding true value.

Experimental evaluation

In order to select the best pre-trained models for each sub-task, we split the training datasets into 70% for training and 30% for development.

For the first sub-task, we selected the Sentence-BERT models 'bert-base-nli-mean-tokens', 'all-roberta-large-v1' and 'paraphrase-mpnet-base-v2'. The first and second pre-trained SBERT models are based on the well-known BERT-based language models (the BERT and RoBERTa language models, respectively). The third pre-trained model was trained on a paraphrase dataset where two paraphrases have close representations. Table 4 shows the results for each pre-trained model. The 'paraphrase-mpnet-base-v2' model achieved the best results for both metrics. We assume that the analysis of paraphrases is similar to the analysis of terms that share the same concept, which allowed this model to outperform the other models.

For the second sub-task, we selected the BERT language models DistilBERT [Sanh et al., 2019], BERT and RoBERTa. RoBERTa (Robustly Optimized BERT Pretraining Approach) is an extension of BERT with changes to the pre-training procedure [Liu et al., 2019]. The authors trained their model with bigger batches and over more data with long sentences. They also removed the next-sentence-prediction objective and dynamically changed the masking pattern applied to the training data. In this case, the RoBERTa language model outperformed the other models (Table 5).

Table 5: Results of our approach (Section 3.2) using different BERT-based language models for the second sub-task.
BERT model / Accuracy
distilbert-base-uncased / 0.906
bert-base-uncased / 0.921
roberta-base / 0.922
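To make the two approaches above concrete, the following is a minimal sketch rather than the exact training code used in the submissions: it embeds taxonomy terms with a pre-trained SBERT model and fits a logistic regression over the embeddings (sub-task 1), and builds a RoBERTa-plus-feed-forward classifier on the first-token representation (sub-task 2). The model names match those mentioned in the text; the example terms, labels, hidden size and other hyperparameters are illustrative assumptions.

```python
# Minimal sketch of both sub-task models; illustrative data and hyperparameters,
# not the official submission code.
from sentence_transformers import SentenceTransformer
from sklearn.linear_model import LogisticRegression
import torch.nn as nn
from transformers import AutoModel, AutoTokenizer

# --- Sub-task 1: SBERT term embeddings + logistic regression ---------------
sbert = SentenceTransformer("paraphrase-mpnet-base-v2")

terms = ["Hazardous Waste", "Waste Reduction Initiatives", "Carbon footprint"]
concepts = ["Waste management", "Waste management", "Carbon factor"]  # toy labels

X = sbert.encode(terms)                               # one embedding per term
clf = LogisticRegression(max_iter=1000).fit(X, concepts)
print(clf.predict(sbert.encode(["Waste recycling"])))  # predicted concept

# --- Sub-task 2: RoBERTa representation + feed-forward head ----------------
class SustainabilityClassifier(nn.Module):
    def __init__(self, model_name="roberta-base", hidden=256, n_classes=2):
        super().__init__()
        self.encoder = AutoModel.from_pretrained(model_name)
        self.head = nn.Sequential(
            nn.Linear(self.encoder.config.hidden_size, hidden),
            nn.ReLU(),
            nn.Linear(hidden, n_classes),
        )

    def forward(self, input_ids, attention_mask):
        out = self.encoder(input_ids=input_ids, attention_mask=attention_mask)
        cls = out.last_hidden_state[:, 0]  # first ([CLS]-equivalent) token
        return self.head(cls)

tokenizer = AutoTokenizer.from_pretrained("roberta-base")
model = SustainabilityClassifier()
batch = tokenizer(["The company cut its CO2 emissions by 30%."],
                  return_tensors="pt", padding=True, truncation=True)
logits = model(batch["input_ids"], batch["attention_mask"])
print(logits.softmax(dim=-1))  # probabilities for 'sustainable'/'unsustainable'
```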
Official results

We submitted two runs for ESG taxonomy extension. The first run used the approach described in Section 3.1 to train our model on the training data (Fortia ESG taxonomy). For the second run, we extended the Fortia ESG taxonomy with our in-house ESG taxonomy and used the same procedure to train the model. Our ESG taxonomy consists of a total of 65 terms spread across 22 concepts. For both runs, we used the pre-trained SBERT model 'paraphrase-mpnet-base-v2'.

Official results for the first sub-task are listed in Table 6. Both of our runs achieved the best results for mean rank and accuracy. In fact, our siamese model provided a better semantic representation of terms and outperformed the other approaches. The extension of the training data with our taxonomy enabled our model to better analyze the context of terms and their corresponding concepts and, consequently, improved the accuracy by 0.014 points.

We also submitted two runs for the second sub-task. The first run follows the same idea described in Section 3.1 to represent the sentences using SBERT. Then, the logistic regression classifies these sentence representations into only two classes: 'sustainable' and 'unsustainable'. The second run uses the deep-learning model described in Section 3.2. This model uses the pre-trained RoBERTa language model and two feed-forward layers to classify a sentence as 'sustainable' or 'unsustainable'.

Official results for the second sub-task are listed in Table 7. Our runs achieved the fourth best result. The combination of the fine-tuned RoBERTa language model and feed-forward layers outperformed both baselines as well as our run with SBERT and logistic regression. Our models performed well (over 92% accuracy) and were ranked among the top 5 systems (0.19 points below the best-performing system).

Conclusion

This paper described the joint effort of the L3i laboratory of the University of La Rochelle and the Trading Central Labs in the Shared Task FinSim-4 evaluation campaign for the task of ESG in financial documents. For this task, we developed BERT-based models. Our model based on siamese sentence analysis achieved the best results for the first sub-task. For the second sub-task, our approach based on the RoBERTa model obtained the fourth position.

Figure 1: Sentence transformer architecture at inference, used to compute semantic similarity scores between two sentences.
Table 2: Dataset description for the sustainability sub-task (training data provided by the organizers).
Table 3: Details of the split of the 'Fortia ESG taxonomy' dataset used to set our meta-parameters (number of examples in the resulting training and development split).
Table 4: Results of our approach (Section 3.1) using different SBERT models for the first sub-task.
Table 6: Official results for the first sub-task. Our approaches are listed at the bottom of the table; the best results are in bold. The run 'ours wo extended data' was trained on the original training data provided by the organizers, and the run 'ours with extended data' was trained on the original data set combined with our taxonomy.
Table 7: Official results for the second sub-task. Our approaches are listed at the bottom of the table; the best results are in bold.
Wnt/β-Catenin signaling reduces Bacillus Calmette-Guerin-induced macrophage necrosis through a ROS-mediated PARP/AIF-dependent pathway

Background: Necrosis of alveolar macrophages following Mycobacterium tuberculosis infection has been demonstrated to play a vital role in the pathogenesis of tuberculosis. Our previous study demonstrated that Wnt/β-catenin signaling was able to promote mycobacteria-infected cell apoptosis through a caspase-dependent pathway. However, the function of this signaling in macrophage necrosis following mycobacterial infection remains largely unknown.

Methods: Murine macrophage RAW264.7 cells were infected with Bacillus Calmette-Guerin (BCG) in the presence of Wnt/β-catenin signaling. Necrotic cell death was determined by cytometric assay and electron microscopy; the production of reactive oxygen species (ROS) and reduced glutathione (GSH) was measured by cytometric analysis and an enzyme-linked immunosorbent assay, respectively; and the activity of poly (ADP-ribose) polymerase 1 (PARP-1)/apoptosis-inducing factor (AIF) signaling was examined by an immunoblotting assay.

Results: BCG can induce RAW264.7 macrophage cell necrosis in a dose- and time-dependent manner, along with an accumulation of reactive oxygen species (ROS). Intriguingly, an enhancement of Wnt/β-catenin signaling shows an ability to reduce the mycobacteria-induced macrophage necrosis. Mechanistically, the activation of Wnt/β-catenin signaling is capable of inhibiting necrotic cell death in BCG-infected RAW264.7 cells through a mechanism by which the Wnt signaling scavenges intracellular ROS accumulation and increases the cellular GSH concentration. In addition, immunoblotting analysis further reveals that Wnt/β-catenin signaling is capable of inhibiting the ROS-mediated cell necrosis in part through a PARP-1/AIF-dependent pathway.

Conclusions: An activation of Wnt/β-catenin signaling can inhibit BCG-induced macrophage necrosis by increasing the production of GSH and scavenging ROS, in part through repression of the PARP-1/AIF signaling pathway. This finding may thus provide an insight into the underlying mechanism of alveolar macrophage cell death in response to mycobacterial infection.

Background

Mycobacterium tuberculosis (Mtb) is the causative agent of human tuberculosis (TB) and is regarded as one of the most harmful pathogens, responsible for more deaths than any other microorganism. To date, one third of the world's population has immunological evidence of Mtb infection [1]. TB is characterized by the presence of caseous necrotic lesions in the lungs, which are mainly composed of the cellular corpses that result from necrotic death of macrophages infected by Mtb [2]. Thus, necrotic death has been suggested to play a central role in the pathogenesis of TB, and inhibition of the necrosis of Mtb-infected cells is therefore of central importance in TB disease. It has been demonstrated that necrotic cell death is an energy-independent and disordered form of cell death, which allows the release of viable mycobacteria for subsequent re-infection. Although several recent studies suggested that necrosis could also follow a strictly programmed and ordered series of events [3,4], the precise mechanism underlying the necrosis of Mtb-infected host cells remains largely unknown.
A necrotic cell can be morphologically characterized by vacuolation of the cytoplasm, breakdown of the plasma membrane and the induction of inflammation around the dying cell attributable to the release of cellular contents and pro-inflammatory molecules. Necrosis can be triggered mainly by cellular 'accidents' such as toxic insults, physical damage or reactive oxygen species (ROS) [5]. In this regard, ROS can act as an important mediator of cell death and have been strongly implicated in the aforementioned detrimental host response that results in self-injury [6,7]. However, the molecular mechanisms underlying ROS-mediated cell death have not been fully elucidated. Several studies have suggested that ROS are involved in the necrosis of many cell types [8,9]. For instance, Zhang et al. uncovered a role of receptor-interacting protein 3 (RIP3) in the switching between tumor-necrosis factor (TNF)-α-induced apoptosis and necrosis, by which cell necrosis could occur partly through increased energy metabolism-associated ROS production [10]. Such ROS-mediated cell necrosis was also found in human hepatocellular carcinoma SK-Hep1 cells treated with β-lapachone, where β-lapachone could induce cell necrosis through activation of a ROS-mediated RIP1/poly (ADP-ribose) polymerase 1 (PARP-1)/apoptosis-inducing factor (AIF) signaling pathway [6]. However, recent studies demonstrated that TNF-induced necrosis and PARP-1-mediated necrosis represent distinct routes to programmed necrotic cell death [11,12], suggesting a cell-context-dependent and/or insult-dependent cell necrosis pathway.

The canonical (Wnt/β-catenin) pathway has been shown to be involved in the interaction of Mtb with macrophages [13,14] and alveolar epithelial cells [15]. An increasing number of studies have demonstrated a regulatory role of Wnt signaling in cell apoptosis or cell death [16,17]. Our previous study also demonstrated that activation of Wnt/β-catenin signaling was able to promote apoptosis of macrophage RAW264 cells infected with Bacillus Calmette-Guerin (BCG) [14]. However, the mechanism underpinning the modulatory role of Wnt/β-catenin signaling in cell death, in particular in the necrosis of immune cells in response to various pathogen infections, remains largely elusive. With this in mind, we interrogated the impact of the activation of Wnt/β-catenin signaling on the necrosis of macrophages in response to BCG infection using the murine macrophage RAW264.7 cell line.

Cell lines and Wnt3a conditioned medium

The murine macrophage RAW264.7 cell line was purchased from the Shanghai Institute of Biochemistry and Cell Biology (Shanghai, China); the Wnt3a-producing cell line L Wnt3a (overexpressing mouse Wnt3a, ATCC #CRL-2647) and its control L cell line (ATCC #CRL-2648) were purchased from the American Type Culture Collection (ATCC) (Manassas, VA, USA). The cells were cultured and maintained at 37°C in a humidified atmosphere of 5% CO2 and 95% air in DMEM medium (Invitrogen, Grand Island, NY, USA) supplemented with 10% Fetal Bovine Serum (FBS) and 1% pen/strep. The Wnt3a and control L cells were grown to confluence, then refreshed with DMEM/2% FBS and kept for 12 h. The culture media were collected and used for the preparation of Wnt3a-conditioned medium (Wnt3a-CM) and control medium (control-CM), respectively. Since transformed cell lines were used in vitro in this study, informed consent was not required, and there was no ethical concern.
Infection of RAW264.7 macrophage cells with BCG

Mycobacterium bovis BCG, Beijing strain (Center for Disease Control and Prevention (CCDC), Beijing, China), was grown for 2 weeks at 37°C in a humidified incubator with shaking in Middlebrook 7H9 broth (BD Diagnostic Systems, Sparks, MD, USA) containing 10% albumin dextrose catalase supplement (Difco, West Molesey, Surrey, UK). Cultures were then harvested by centrifugation at 500 × g for 10 min and re-suspended in the medium. The bacilli were then titrated as previously described [18] prior to being aliquoted and stored in a -80°C freezer. The control or transfected RAW264.7 cells described above were infected with BCG at a multiplicity of infection (MOI) of 10 and incubated at 37°C in a 5% CO2, humidified air atmosphere for an additional 6 h prior to being harvested for analysis.

Flow cytometry analysis for cell necrosis

Cells were treated with different conditions for 2 h prior to being infected with BCG for an additional 6 or 36 h before they were collected and stained with Annexin V and PI using an Apoptosis and Necrosis Detection Kit I (BD Pharmingen, San Jose, CA, USA) for flow cytometric analysis. The flow cytometry assay was performed on a BD FACSCanto II, and data were analyzed with FlowJo 8.8.6 software (Tree Star Inc, Ashland, OR, USA). All experiments were performed with biological triplicates, and data are representative of at least three independent experiments.

Electron microscopy

The cells cultured under different conditions were first observed under an inverted microscope before being harvested for electron microscopy analysis. For scanning electron microscopy (SEM) analysis, the cells were fixed with 2.5% glutaraldehyde, stained with 1.25% osmium tetroxide in PBS, dehydrated, and sputter coated prior to visualization on a Hitachi S-450 microscope (Tokyo, Japan). For transmission electron microscopy (TEM) analysis, the cells were fixed and stained as for SEM, followed by infiltration with Spurr resin after dehydration; 80 nm serial sections were then viewed on a Hitachi H-7650 electron microscope (Tokyo, Japan). The apoptosis of cells was determined according to the morphological criteria described in a previous study [19].

Reduced glutathione (GSH) assay

RAW264.7 cells were treated with Control-CM, Wnt3a-CM, BCG, DKK1, LPS or H2O2 alone, or in combination. After 6 h of incubation, the cellular reduced glutathione was quantified using a GSH Assay kit (Jiancheng Institute of Biotechnology, Nanjing, China) per the manufacturer's protocol. The reduced GSH levels were normalized to protein concentrations. All experiments were performed with biological triplicates, and data are representative of at least three independent experiments.

Flow cytometric analysis of intracellular ROS

Cells were loaded with 5-(and-6)-chloromethyl-2′,7′-dichlorofluorescin diacetate (DCHF-DA) for intracellular ROS measurement by assessing the intramitochondrial O2 as described previously [20]. Briefly, the cells were harvested and washed with 1 × PBS, followed by incubation with 5 mmol/L DCHF-DA in the dark at 37°C for 15 min. The cells were then washed in 1 × PBS and resuspended in plain DMEM for the flow cytometry assay. The flow cytometric analysis was performed on a BD FACSCanto II. At least 20,000 events were analyzed. All experiments were performed with biological triplicates, and data are representative of at least three independent experiments.

NAD+ analysis

5 × 10^4 cells/well were seeded in a 96-well plate and cultured overnight before being treated with different conditions.
The total intracellular NAD+ was measured using the EnzyChrom NAD Assay Kit according to the protocol provided by the manufacturer (E2ND-100, BioAssay Systems, Hayward, California).

Immunoblotting analysis

Whole-cell extracts were prepared by homogenizing the cells in a lysis buffer (50 mM Tris-HCl, pH 7.5, 5 mM EDTA, 150 mM NaCl, 0.5% NP-40) for 60 min on ice. The lysates were then centrifuged at 10,000 × g for 10 min at 4°C, and the supernatants were collected as whole-cell extracts. The soluble protein concentration was measured with the Bio-Rad Protein Assay (Bio-Rad Laboratories, Richmond, CA) using bovine serum albumin (BSA) as a standard. The cell extracts (50 μg) were separated on a 10% sodium dodecyl sulfate (SDS)-polyacrylamide gel (SDS-PAGE) and transferred to a PVDF membrane (Millipore, Billerica, MA, USA). The membrane was blocked in 4% fat-free dry milk in PBS containing 0.2% Tween-20 and probed using antibodies against PARP-1, cleaved PARP-1, AIF and β-actin, followed by appropriate peroxidase-labeled secondary antibodies. The blots were then developed using the enhanced chemiluminescence (ECL) reagent (Amersham Biosciences, Piscataway, NJ, USA). All of the above antibodies were from Cell Signaling Technology (Beverly, MA, USA).

Statistical analysis

All data collected in this study were obtained from at least three independent experiments for each condition. SPSS 18.0 analysis software was used for the statistical analysis. Statistical evaluation of the data was performed by one-way ANOVA and by t-test for comparison of differences between two groups. A value of p < 0.05 was set to represent a statistical difference and a value of p < 0.01 to represent a statistically significant difference. Data are presented as the mean ± standard deviation (SD).

BCG-induced RAW264.7 cell necrosis can be inhibited by an activation of Wnt/β-catenin signaling

In order to evaluate the necrosis of macrophages in response to mycobacterial infection, murine RAW264.7 macrophage cells were infected with BCG at different dosages for various time points. Results of flow cytometric analysis revealed a dose- and time-dependent reduction of cell viability and an increased necrotic cell fraction following BCG infection, indicating that BCG was able to induce RAW264.7 macrophage necrosis in a time- and dose-dependent manner (Figure 1A and B). Intriguingly, the BCG-induced cell necrosis could be significantly reduced when the cells were exposed to Wnt3a, a ligand of the Wnt/β-catenin signaling pathway (Figure 1C). On the contrary, cells with enforced expression of the Wnt signaling antagonist DKK1 exhibited the opposite effect of Wnt3a in RAW264.7 cells, where the introduction of DKK1 displayed a capacity to promote BCG-induced cell necrosis (Figure 1D). The function of Wnt/β-catenin signaling in the inhibition of BCG-induced macrophage necrosis was further morphologically confirmed by assessing the characteristic features of necrotic cells using scanning electron microscopy (SEM) and transmission electron microscopy (TEM) (Figure 2A). The EM images of RAW264.7 cells revealed that the majority of cells exposed to control-CM showed healthy morphology, characterized by an intact nuclear membrane with abundant surrounding microvilli, uniform cytoplasm with rare cytoplasmic vacuoles, well-organized organelles, nuclei with a clear membrane boundary, and uniformly speckled chromatin (Figure 2A and data not shown).
In contrast, increasing numbers of necrotic cells were observed among cells exposed to BCG, characterized by loss of microvilli, disappearance of plasma membrane integrity and the presence of cellular organelles (Figure 2A and data not shown). Importantly, quantitative analysis demonstrated that the addition of Wnt3a-CM could significantly inhibit necrotic cell death of the BCG-infected cells in comparison with the control-CM-treated cells when the necrotic cell numbers were determined by EM morphology (p < 0.01) (Figure 2B). These morphological results provided further evidence that activation of Wnt/β-catenin may inhibit necrosis in mycobacteria-infected macrophages.

Wnt/β-catenin signaling suppresses the production of ROS in BCG-infected RAW264.7 cells

The importance of the balance between ROS production and scavenging is underscored by observations that oxidative stress can be either protective or damaging in several diseases [8]. With this in mind, we next examined whether the infection of BCG was able to induce RAW264.7 macrophages to produce ROS that subsequently induced cell necrosis. As expected, a dose- and time-dependent ROS production was determined in the RAW264.7 cells when they were infected with BCG at an MOI of 20 or less for up to 6 h (Figure 3A and B). To verify whether BCG induces macrophage necrosis through ROS, the ROS scavenger NAC was employed to determine the impact of ROS on BCG-induced cell necrosis. As shown in Figure 4A, the addition of NAC (10 mmol/L) could significantly reduce the BCG-induced ROS content, along with a decreased necrosis rate (p < 0.01) (Figure 4B,C). Of note, the increased level of ROS was correlated with the necrotic death of RAW264.7 cells in response to BCG infection. Therefore, we next sought to explore whether an activation of Wnt/β-catenin signaling had an impact on the production of ROS. Notably, a remarkable reduction of intracellular ROS production was observed in BCG-infected RAW264.7 cells exposed to Wnt3a-CM (Figure 3C). The Wnt3a-mediated ROS-scavenging activity was further confirmed by the addition of H2O2 to RAW264.7 cell cultures at a final concentration of 500 μmol/L, in which the Wnt3a-CM showed a capacity to dramatically reduce the ROS level (Figure 3D). These data imply that an activation of Wnt/β-catenin signaling can repress the BCG-induced cell necrosis through a mechanism of reducing the accumulation of intracellular ROS.

Wnt/β-catenin signaling induces the production of glutathione (GSH) in RAW264.7 cells

Next, we attempted to understand a possible underlying mechanism by which Wnt/β-catenin signaling scavenges cellular ROS. Several recent studies demonstrated that cellular ROS could be eliminated through detoxification mechanisms provided by endogenous antioxidant enzymes and antioxidants such as glutathione (GSH) [12,21]. These findings prompted us to examine the GSH concentration in RAW264.7 cells treated under various conditions. The results showed that the presence of Wnt3a-CM could significantly increase the production of GSH in both naïve RAW264.7 macrophages and the BCG-infected cells; in contrast, cells transfected with the Wnt signaling antagonist DKK1 displayed the opposite behavior to that seen in the Wnt3a-CM-treated cells, i.e. the expression of DKK1 could reverse the Wnt3a-induced GSH production (Figure 5A).
To further confirm the capacity of Wnt3a to induce intracellular GSH production upon an external insult, the intracellular GSH levels of RAW264.7 cells treated with H2O2, lipopolysaccharide (LPS), H2O2/Wnt3a or LPS/Wnt3a were measured. Indeed, the addition of Wnt3a-CM could markedly increase the GSH levels in RAW264.7 cells treated with H2O2 or LPS (Figure 5B). These results suggest that the activation of Wnt/β-catenin signaling increases the intracellular GSH pool, which may underlie its capacity to scavenge ROS.

Wnt/β-catenin signaling inhibits ROS-mediated necrosis in part through the PARP-1/AIF pathway

PARP-1 plays a key role in DNA repair and the maintenance of genomic stability [22]. ROS-induced DNA damage can activate PARP-1; over-activated PARP-1 rapidly utilizes its substrate NAD+ to transfer poly (ADP-ribose) (PAR) to itself and to nuclear acceptor proteins, and the host cell subsequently consumes its ATP pools to resynthesize NAD+, which results in a cellular energy crisis and cell death [22,23]. Moreover, a recent study suggested an involvement of PARP-1 in ROS-induced necrotic cell death [6]. With this in mind, we interrogated the impact of Wnt/β-catenin signaling on PARP-1 in BCG-infected RAW264.7 cells by an immunoblotting analysis. The results showed an increased abundance of PARP-1 and its downstream signaling protein AIF, as well as of the cleaved form of PARP-1, in BCG-infected cells (Figure 6A). Of note, the addition of Wnt3a-CM could suppress the BCG-induced PARP-1 and AIF protein expression and inhibit PARP-1 activity by reducing the cleaved form of PARP-1 in RAW264.7 cells (Figure 6A). Equally importantly, the pharmacological inhibitor of PARP-1, 3-AB, also showed an activity to impair PARP-1 activation by inhibiting PARP-1 cleavage and AIF expression (Figure 6A).

(Figure 3. Impacts of BCG and/or Wnt/β-catenin signaling on ROS production in RAW264.7 cells: (A) time-dependent and (B) dose-dependent ROS production in BCG-infected cells measured by flow cytometry; (C) reduction of BCG-induced ROS production and (D) reduction of H2O2-induced oxidative stress by the addition of Wnt3a.)

To verify whether Wnt3a has an impact on the cellular NAD+ level, a marker of PARP-dependent cell necrosis, the cellular NAD+ content was measured. As shown in Figure 6B, Wnt3a could inhibit the BCG-induced depletion of cellular NAD+. On the contrary, an introduced expression of the Wnt/β-catenin pathway inhibitor DKK1 significantly reversed the effect of Wnt3a (Figure 6B). In order to confirm whether the PARP-1 pathway was involved in the Wnt signaling-modulated necrosis of RAW264.7 cells infected with BCG, the frequencies of necrotic death of RAW264.7 cells treated with BCG, Wnt3a or 3-AB alone or in combination were ascertained by flow cytometric analysis (Figure 6C and D).
Time-dependent cell necrosis was found in RAW264.7 cells infected with BCG; more abundant necrotic cell death was observed at 36 h post BCG infection as compared with that at 6 h (Figure 6C). Notably, the addition of Wnt3a-CM or the PARP-1 inhibitor 3-AB alone could significantly reduce the BCG-induced cell necrosis (p < 0.01) (Figure 6C and D). Interestingly, the addition of Wnt3a-CM to the 3-AB-treated RAW264.7 cells failed to further inhibit BCG-induced necrosis (Figure 6C and D). These results suggest that Wnt3a can inhibit necrosis of BCG-infected macrophage cells through the ROS-mediated PARP-1/AIF signaling pathway.

Discussion

A large body of studies has demonstrated that Wnt signaling is capable of governing cell survival, proliferation, differentiation and apoptosis through multiple intracellular signaling pathways [24]. In this regard, canonical Wnt signaling regulates cell proliferation and apoptosis in a cell-context-dependent manner, by which activated Wnt/β-catenin signaling was able to enhance cell proliferation but also to induce apoptosis in a variety of cells [16,17]. Our previous study also revealed that an activation of Wnt/β-catenin signaling could promote apoptosis of BCG-infected macrophages, in part through a caspase-dependent apoptosis pathway [14]. Along similar lines, in this study we found that an activation of canonical Wnt signaling could inhibit the BCG-induced macrophage necrosis, at least in part through the ROS-mediated PARP-1/AIF signaling pathway.

With regard to cell death induced by an external insult, oxidative stress and ROS are known to be implicated in a number of physiological and pathological processes, leading to various biological consequences including necrosis [25]. In this context, free oxygen radicals are highly toxic to pathogens and are utilized as a tool by host cells to prevent colonization of tissues by microorganisms [7]. Therefore, ROS production has been recognized as one of the earliest innate immune responses of host cells to a microbial invasion. Although the mode of ROS production may be cell-context- and insult-dependent, previous studies suggest that ROS can act as a common signal triggering cell death through the activation of the MAPK and/or JNK pathways [9]. In agreement with these findings, in the present study we found a dose- and time-dependent ROS production, along with increased numbers of necrotic cells, in RAW264.7 cells in response to BCG infection. Equally importantly, an activation of canonical signaling by the addition of Wnt3a-CM led to decreased ROS production and cell necrosis in these cells, regardless of BCG infection. Such an inhibitory role of Wnt/β-catenin in ROS production and necrosis was tightly correlated with suppressed expression of PARP-1 and its downstream mediator AIF, and with an inhibited PARP-1 activity. Such PARP-1 signaling pathway-induced necrosis was also found in human hepatocellular carcinoma SK-Hep1 cells [6] and HT-22 cells [26]. Most recently, Jiang et al. found that the Wnt signaling pathway was involved in preventing steroid-induced osteonecrosis of the femoral head (steroid-induced ONFH) by suppressing PPARγ expression [27]. Conversely, a combination of the Wnt signaling inhibitor DKK1 and hypoxia could cause necrotic osteocytic cell death, and blocking of Dkk-1 was able to protect bone cells from glucocorticoid- and hypoxia-induced cell injury [28].
Together with our findings, these studies suggest a cell context-dependent modulatory role of Wnt signaling in cell necrosis. PARP-1 is a zinc finger protein that belongs to a family of 18 identified genes that transcribe poly (ADP-ribose) polymerases, enzymes that catalyze the covalent transfer of poly-ADP units from NAD+ to acceptor proteins. It is known as a key effector capable of amplifying necrotic signals in a necrotic pathway, while under physiological conditions PARP-1 contributes to DNA base excision repair and the maintenance of genomic stability [22,29]. In addition, an increasing number of studies have revealed that AIF is a mediator of cell necrosis: AIF can be translocated from the mitochondria to the cytosol and nucleus, where it binds DNA and RNA to induce caspase-independent chromatinolysis in the nucleus [30,31]. As GSH is an important intracellular scavenger of ROS, and the ratio between oxidized and reduced glutathione (GSSG/GSH ratio) has been used as an important index of the redox balance in the cell and consequently of cellular oxidative stress, an increased GSH level may be able to attenuate oxidative stress [32]. Indeed, in the current study, an inverse correlation between GSH concentration and intracellular ROS level was observed in RAW264.7 cells treated under different conditions. Intriguingly, activation of Wnt signaling increased the concentration of intracellular GSH and subsequently reduced ROS accumulation in cells infected with BCG. It is worth noting that infection with BCG did not lead to a reduction of cellular GSH content in RAW264.7 cells, suggesting that the increased GSH content induced by Wnt3a might not be the sole mechanism for elimination of BCG-induced ROS, even though the Wnt-induced intracellular GSH showed an ability to eliminate BCG-induced ROS in this study. In addition, the process of ROS generation is complex; for example, the BCG-up-regulated antimicrobial peptide cathelicidin LL-37 can lead to rapid ROS production in human epithelial cells [33]. Moreover, NAD depletion is an essential event in the sequence leading from PARP-1 activation to cell death, in which NAD is required for maintaining PARP functions [34].

Limitations

Certain limitations to our findings must be considered. In particular, an avirulent mycobacterial strain, BCG, and a murine macrophage cell line were used in this study. In future studies, primary macrophages and virulent mycobacteria should be employed to investigate the impacts of Wnt/β-catenin signaling in macrophages in response to mycobacterial infection and to verify our current findings.

Conclusions

The results reported in this study demonstrate that a ROS-mediated PARP-1/AIF pathway is involved in the BCG-induced necrosis of alveolar macrophage RAW264.7 cells. Interestingly, activation of Wnt/β-catenin signaling can inhibit the BCG-induced necrosis in part through the PARP-1/AIF pathway, by which Wnt/β-catenin signaling down-regulates expression of PARP-1 and AIF and induces the production of GSH that scavenges intracellular ROS accumulation. We thus uncovered a novel underlying mechanism of Wnt/β-catenin signaling in the necrotic death of macrophages in response to a mycobacterial infection. Of note, other necrosis pathways may also be involved in the Wnt-mediated inhibition of cell necrosis, which need to be defined in future studies.
Further study of the regulation of macrophage necrosis by Wnt/β-catenin signaling during Mtb infection will be helpful for better understanding immune regulation and for developing new preventive and therapeutic strategies for TB.
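The infection experiments above repeatedly use a fixed multiplicity of infection (MOI of 10) for BCG. As a purely illustrative aside, not taken from the article, the following minimal helper shows how an inoculum volume could be planned for a given MOI; the function name, parameters and example numbers are ours, not the authors'.

# Illustrative only: planning an infection at a given MOI (bacteria per cell),
# as used above (e.g., BCG at an MOI of 10). Not from the article.

def inoculum_volume_ml(moi: float, cell_count: float, stock_cfu_per_ml: float) -> float:
    """Volume of bacterial stock (mL) needed to infect `cell_count` cells at `moi`.

    moi              -- desired bacteria per cell (e.g., 10)
    cell_count       -- number of target cells in the well or flask
    stock_cfu_per_ml -- titre of the bacterial stock in CFU/mL
    """
    required_cfu = moi * cell_count          # total bacteria needed
    return required_cfu / stock_cfu_per_ml   # convert CFU to volume of stock

if __name__ == "__main__":
    # Example (hypothetical numbers): 1e6 cells, MOI 10, stock 5e8 CFU/mL -> 0.02 mL
    print(inoculum_volume_ml(moi=10, cell_count=1e6, stock_cfu_per_ml=5e8))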
2016-05-12T22:15:10.714Z
2015-03-18T00:00:00.000
{ "year": 2015, "sha1": "00ac85fcf81b86e3639275bccbe8e33e165f2df1", "oa_license": "CCBY", "oa_url": "https://bmcimmunol.biomedcentral.com/track/pdf/10.1186/s12865-015-0080-5", "oa_status": "GOLD", "pdf_src": "PubMedCentral", "pdf_hash": "a23de73c32357c17482d36dff9717251d2fe800c", "s2fieldsofstudy": [ "Biology", "Medicine" ], "extfieldsofstudy": [ "Biology", "Medicine" ] }
244933307
pes2o/s2orc
v3-fos-license
The Cyclobutanocucurbit[5–8]uril Family: Electronegative Cavities in Contrast to Classical Cucurbituril while the Electropositive Outer Surface Acts as a Crystal Packing Driver The structural parameters for the cyclobutanoQ[5–8] family were determined through single crystal X-ray diffraction. It was found that the electropositive cyclobutano methylene protons (CH2) are important in forming interlinking crystal packing arrangements driven by the dipole–dipole interactions between these protons and the portal carbonyl O of a near neighbor. This type of interaction was observed across the whole family. Electrostatic potential maps also confirmed the electropositive nature of the cyclobutano CH2 but, more importantly, it was established that the cavities are electronegative in contrast to classical Q[5–8], which are near neutral. A bank of fully substituted derivatives of Q[n] have slowly emerged in the literature between 1992-2017, where the equatorial regions have been decorated with methyl (Me), hydroxyl (OH) groups, or the fused rings cyclohexano (CyH), cyclopentano (CyP), and cyclobutano (CyB) (Figure 1, R n Q[n]) [25][26][27][28][29][30]. The alkyl-substituted examples are all synthesized by the H + /cat. condensation and oligomerization of an appropriately substituted glycoluril with HCHO or from its diether alone. However, bearing Me or CyH substituents only favors the formation of the smaller homologues n = 5 (major) and 6 [25][26][27]. Significantly, the contracted rings, especially CyB, enable the formation of higher homologues n = 7 and 8 [29,30]. Just prior to our reporting of the synthesis of the fully substituted CyB 8 Q [8] the partially substituted Me 4 Q [8] and CyH 2 Q [8] were the only higher homologues (n = 8) carrying substitution that are prepared by condensation and cyclo-oligomerization ( Figure 1, bottom R x Q[n]) [31]. The motivation for examining the contracted ring substituents CyP and then CyB related to a hypothesis that the dihedral angle, β°, of the fused imidazolidinone rings at the concave face was an important contributor to the formation of higher homologues. This was especially relevant to the synthesis of fully substituted higher homologues, which were otherwise unavailable. As a follow-up to the previous work, we included theoretical calculations for the angle β° for each of the precursor glycoluril diethers (R = Me, CyH, CyP, CyB, Figure 1). Using this theory, the angle for each was not only available to compare to measured values as a verification but also to provide calculated values, which would otherwise not be available. Herein, we report a repeat of the initial synthesis of CyB5-8Q [5][6][7][8] with the objective of obtaining crystal structures for each of the homologues [30]. This not only provides support for the original findings but also enables the collection of important structural data to better understand possible influences on their physical and chemical properties. Particularly relevant is the diameter of the cavities and the portal openings in comparison to homologues of classical Q[n] and/or different substitutions. In solution, it was previously observed that equatorial substitution has an effect upon the binding affinities of various molecular guests, where affinities can increase or decrease relative to the same guest molecule [20][21][22][29][30][31][32]. 
In the case of partially substituted derivatives such as Me4Q[6] and Me4Q[8] (Figure 1, bottom), a distortion in the cavity can partly explain a change in the binding affinity, which is also dependent upon the guest's shape [20][21][22][31], whereas, with full substitution, the Q cavities are spheroidal and effects upon binding affinities in solution can best be explained by electronic changes, the degree of Q structure rigidity, and variations in diameters of the portals and cavities. Two significant examples that demonstrate this were previously reported for comparative binding affinities of classical Q[6] and CyP6Q[6] (Figure 1) for the guest ions cyclohexylammonium and octane-1,8-diammonium, respectively, ~120- and 8-fold higher in the latter host [20].
The explanation for the increase is primarily related to an increase in electron density on the carbonyl O contributed by the equatorial alkyl substituents [21]. In addition to the physical dimensions, electronic surface effects in the solid state were reported by Tao and coworkers for classical Q[n] crystal packing. They found a strong interactive correlation between the electropositive outer surface of Qs and a neighboring Q electronegative portal and/or anions, in particular the anion [ZnCl4]2− or similar large anions [3]. The objectives in this study were to determine the physical and electronic similarities or differences between CyB5-8Q[5-8] and classical Q[n].

Results and Discussion

We were fortunate in being able to obtain single crystals of all four homologues with suitable quality for X-ray diffraction. The CyB5Q[5] was obtained in the first crop of crystals from acidic water (~0.05 M) as CyB5Q[5]·7H2O (1). The first notable feature of the family of CyB5-8Q[5-8] was the similarity in their dimensions to the classical Q[5-8] family. A comparison to a selection of reported guest-free classical Q[5-8] showed that the average dimensions of the portals, cavities, and depths were indiscernible from those of CyB5-8Q[5-8] (Table 1). Another important feature found with the CyB5-8Q[5-8] family in their crystal packing arrangement is the interaction of the outer equatorial surface of CyBnQ[n] with the electronegative portal O and/or anions. This was found with the Cl− anion in the case of the crystal of 2. In this context, we calculated the ESP maps for each of the CyBnQ[n] homologues. Compared to similar electrostatic potential maps (ESP) for the classical Q[5-8] there were two distinct differences [3b]. The outer equatorial surface was less positive by 10-12 kcal mol−1 compared to ESPs for the classical Q[5-8], yet clearly sufficiently positive, favoring interactions with the electronegative C=O in the crystal packing of 1, 2, 3, and 4, with additional anion interactions specific to 2 and 4 [3a]. Of far greater significance was the inner cavity surface potential at the widest point. Here the ESP was found to be more negative by over 12.5 kcal mol−1, whereas classical Q[n] are near neutral. This was more obvious as the cavities increased in diameter from CyB5Q[5] to CyB8Q[8], where the latter had the most negative cavity surface. Previously reported decreases in binding affinity of guests cannot, therefore, be explained based on larger portals or spacious cavities; however, the electronic differences are likely a factor [30].
Highlighted Structural Features for 1, 2, 3, and 4 and Their Outer Surface Interactions

The solid-state structure of CyB5Q[5]·7H2O (1) was found to be a relatively simple set of stacked cages forming columns side by side as a continuous corrugated sheet (Figure S1). The portals are superimposed upon the CyB5Q[5] below, but the cages of each column are out of register with cages in the adjacent column (A and B, B and C, Figure 2), creating a tube of cavities. The asymmetric unit structure CyB5Q[5]·7H2O contains a water molecule in each portal (O1W and O2W) but none in the cavity (Figure S2a). The remaining 5H2O completes a H-bonding network that helps to link each CyB5Q[5] cage (Figure S2b and Table S1). However, the most significant driving force in the crystal packing arises through dipole-dipole interaction between the portal C=O and the cyclobutano CH2 protons. The lacing together of columns A, B, and C is effected through 16 associated interactions, as shown for a set of four CyB5Q[5] (Figure 2). The specific proton connections are O1-H4A and H3B, O4-H27B and H28A, O7-H4B, O8-H3A, O9-H28B, and O10-H27A (distances 2.56, 2.54, 2.59, 2.51, 2.38, 2.81, 2.84, and 2.37 Å, respectively). Hence, there are significant dipole-dipole interactions between the electronegative C=O and the electropositive cyclobutano CH2.

The primary driving force for the crystal packing of CyB6Q[6] in the crystal CyB6Q[6]·2Cl−·2(H3O+)·14H2O (2), obtained from dilute HCl, was also the outer surface interaction of the partially positive CH2 protons of the cyclobutano substituent. This is reflected in the short distances between O6-H7B, O4-H23A, and Cl1-H7A (2.44, 2.75, and 2.74 Å, respectively). Slightly longer interactions between Cl1-H22A and H15 (2.94 and 3.00 Å) were also found. The portals O4 and O6 are directly connected to H23A and H7B of their nearest cyclobutano CyB6Q[6] neighbor (Figure 3).

The third largest homologue CyB7Q[7] was concentrated through chromatography and crystallized from dilute HCl solutions with very slow evaporation to afford CyB7Q[7]·12H2O (3) (Figure 4). Interestingly, no Cl− ions were retained in these structures. The CyB7Q[7] are superimposed in closely stacked arrangements of columns, with significant quantities of water molecules congregated in line with and in close proximity to their portals. The interstitial portal spaces between each CyB7Q[7] had four water molecules sandwiched in these locations. One space was occupied by O3W, O4W, and O5W with O4W duplicated, and O6W, O7W, and O8W occupied the opposite space with O7W duplicated (Figure S4). Surprisingly, no water molecules were found in the cavity, only just inside the portal. The remaining water O1W and O2W duplicated and sat toward the edges of two shared portals. The prime dipole driving force for close-packed column formation appeared to be the connections between the C=Os and the protons of cyclobutano CH2 of the closest CyB7Q[7] in a neighboring column (O1-H3B and O7-H17A, 2.72 and 2.94 Å, respectively).
The largest homologue CyB8Q[8] was successfully crystallized with the assistance of a chlorozincate anion, which resulted in the crystals CyB8Q[8]·(ZnCl3·H2O)−·H3O+·10H2O (4). The CyB8Q[8] cages were found to be knitted together by the [ZnCl3H2O]− anion into an apparent honeycomb structure (Figure 5a). Multiple close associations between the [ZnCl3H2O]− anion and the electropositive cyclobutano protons, with contact distances of ~2.8 Å, created a cluster of three CyB8Q[8] around a single anion (Figure 5b). The second [ZnCl3H2O]− anion shown and the fourth CyB8Q[8] were the beginning of the next stacked layer. The porosity of the solid-state structure was obvious with the omission of H2O. In addition, the portals and cavities were not obstructed by anions (Figure 5a). H2O was found H-bonded to the C=O just inside the portals, close to the Cl of the anion and between the outer surfaces of the CyB8Q[8] (Figure S5). The primary driving force for packing appeared to be the ion-dipole interaction between the electropositive protons on the outer surface and the anion.
However, close portal C=O to cyclobutano CH2 interactions were also found for O5-H60A, O13-H5B, and, slightly longer, O15-H4A (2.74, 2.64, and 2.98 Å, respectively).

Collectively, each homologue was found to have significant outer surface interactions between the electropositive cyclobutano CH2 and the electronegative portal C=O and/or the anions Cl− or [ZnCl3H2O]− that contribute to the crystal packing of each structure. The directionality of the CH2 relative to the cavities favors the formation of near-parallel cavity columns, which contrasts with the classical Q[n]. The equatorial protons of classical Q[n] protrude at 90° relative to the cavity axis and, therefore, allow direct portal interaction with a near neighbor, leading to orientation of the cavities perpendicular to each other. The electropositive outer surface of the family of CyB5-8Q[5-8] was supported by ESP calculations (Figure 6). However, compared to the classical Q[5-8], two distinct differences were found, which are highlighted in Table 2 [4]. The first was that the outer equatorial surface was less positive by 10-12 kcal mol−1 compared to ESPs for the classical Q[5-8], but clearly sufficiently positive to favor interactions with the electronegative C=O in the crystal packing of 1, 2, 3, and 4, with additional anion interactions specific to 2 and 4 [3a]. The second, but more important, difference is at the inner cavity surface at the widest point. Here the ESP was more negative by over 12.5 kcal mol−1, whereas classical Q[n] were near to neutral. The visible color change of the ESP map of the cavity surfaces from yellow to nearly all red was obvious as the cavities increased in diameter, with CyB8Q[8] being the most negative (Figure 6).

Figure 6. Electrostatic potential maps (ESPs) for CyB5Q[5], CyB6Q[6], CyB7Q[7], and CyB8Q[8], respectively.

A feature not discussed so far that could impact the size of the cavities of CyB5-8Q[5-8] is the β° angle of the cis-fused imidazolidinone rings of the glycoluril moieties. On average, this was found to be ~0.8° wider than in classical Q[5-8]. As a poignant comparison, the angles for the six smaller homologues Me10-12Q[5-6], classical Q[5-6], and CyB5-6Q[5-6] are listed in Table 3.
This difference occurred as a function of the substituent carried at the cis-fused junction of the glycoluril moiety. In addition to the ESP theory, we also calculated the dihedral angle β° of the precursor glycoluril diethers for R = Me, CyH, CyP, and CyB, and for R = H, the latter an unknown compound (Figure 1). The interest here was two-fold: first, to determine through calculation the angle β° for the diethers including R = H, for which no measured data are available; and, secondly, to compare theoretical values with those previously measured. A good fit of theory to measurement would also support future application as a predictive tool to identify suitable glycoluril candidates for the synthesis of higher homologues. The theoretical values compared well for the three cyclo-substituted examples and R = Me, as shown in Table 4; however, R = CyB and Me were slightly overestimated.

Table 4. A comparison of calculated and measured dihedral angles β° for the different substituted glycoluril diethers shown in Figure 1.

The calculated trend toward a wider angle from CyH to CyB (six- to four-membered ring substituents), and its consistency with the measured angles, is encouraging and could potentially be applied to future theoretical glycoluril diethers and ultimately to the synthesis of newly substituted Q[n] families. The measured trend is also consistent in the Q[n] derivatives, although the angles are wider due to the involvement of eight-membered rings joining neighboring glycoluril moieties, as opposed to six-membered rings in the glycoluril diethers.

Materials and Methods

Starting materials were purchased from commercial suppliers and used without further purification. CyB5-8Q[5-8] were prepared in accordance with the literature method [30]. NMR spectra were identical to those previously reported. The crude mixture of CyBnQ[n] was obtained as a solid (2.76 g), which also contained LiCl (0.1 g) from the reaction process. Distilled water (50 mL) was added, and the mixture was heated to dissolve the bulk of the material. The undissolved material (1.15 g) was collected by filtration and the filtrate was set aside at RT over a period of days, which resulted in successive quantities of precipitate, also collected by filtration (0.81 g). The undissolved material and the collected precipitates were combined. The total of the collected solids (1.96 g) was then completely dissolved in distilled water (30 mL) following the addition of HCl 32% (8-10 drops). After 30 days at RT, crystals of 1 were obtained (0.15 g). The filtrate was then set aside at RT for 40 days, which yielded crystals of 2 (0.08 g).

Purification of CyB7Q[7] and CyB8Q[8]. The filtrate above, obtained from the solution remaining after the collection of the cocrystallized CyB5-6Q[5-6], was evaporated to dryness and the residue (0.85 g) was subjected to silica gel column chromatography eluting with a mixture of HCO2H/AcOH/EtOH (1:5:0.1). After the early fractions were clear of remaining CyB5-6Q[5-6] (as determined by TLC), the eluant was changed to HCO2H/AcOH (1:2). Fractions of predominantly CyB7Q[7] were combined and the solvent was evaporated in vacuo. Modification of the eluant ratio to 1:1 gave CyB8Q[8]-rich fractions, which were also combined, and the solvent was evaporated. The solid residue from the CyB7Q[7]-rich fractions was dissolved in dilute HCl 0.03 M (2 mL) and set aside for ~50 days with slow evaporation to obtain single crystals of 3.
The residue from the CyB8Q[8]-rich fractions was also dissolved in dilute HCl 0.03 M (2 mL) with added ZnCl2 (0.3 g). After a similar time period with slow evaporation, single crystals of 4 were obtained.

X-ray Crystallography

X-ray crystal data for complexes 1-4 were collected on a Rigaku Oxford Diffraction Supernova Dual Source diffractometer (Oxford Diffraction Ltd., Abingdon, England; Cu at zero) equipped with an AtlasS2 CCD, using Mo-Kα radiation. Lorentz, polarization and absorption corrections were applied. Structural solutions and full-matrix least-squares refinements based on F2 were performed using the SHELXT-14 and SHELXL-14 program packages, respectively. All non-hydrogen atoms were refined with anisotropic thermal parameters. Analytical expressions of the neutral atom-scattering factors were employed and anomalous dispersion corrections were incorporated. A summary of the crystallographic data, collection conditions, and refinement parameters for complexes 1-4 is listed in Table 5. Crystallographic data (excluding structure factors) for the structures reported in this paper have been deposited with the Cambridge Crystallographic Data Centre as deposition Nos. CCDC 1997294, 2050695, 2050988, and 20501000. Copies of the data can be obtained free of charge on application to CCDC, 12 Union Road, Cambridge CB2 1EZ, UK (Fax: +44 1223/336 033; e-mail: deposit@ccdc.cam.ac.uk).

Conclusions

The purification and separation of substituted Q homologues is always a challenge, and here we were able to separate the two more difficult larger homologues on silica gel using polar solvent gradients. The structural parameters for the family of CyB5-8Q[5-8] are similar to those of classical Q[5-8] in their portal and cavity dimensions. Electronically, however, significant electronegative potentials were found within the cavities of CyB5-8Q[5-8], unlike the classical Q[n], which are near neutral based upon comparative ESPs. The outer equatorial surfaces of CyB5-8Q[5-8] are relatively positive but not to the same extent as the classical Q[n]. The electropositive nature of the cyclobutano CH2 plays an important role in the crystal packing of each of the four homologues through dipole-dipole interactions. As the electropositive CH2 are nearly aligned with the cavity axes, packing for all four homologues formed parallel or near-parallel cavity columns. Given that the physical dimensions are similar to the classical Q[n], but that the cavity electrostatic potentials are negative, this latter difference could be significant with regard to the decreases in guest binding constants mentioned in the introduction.
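As a purely illustrative supplement to the dihedral-angle comparison in Table 4 above (the paper does not provide code, and the coordinates below are placeholders rather than the published data), a dihedral angle such as the glycoluril fold angle β can be computed from four atomic positions taken from a crystal structure or an optimized geometry:

# Minimal sketch (assumptions: numpy available; example coordinates are invented).
import numpy as np

def dihedral_deg(p1, p2, p3, p4):
    """Signed dihedral angle (degrees) defined by four points, standard atan2 form."""
    p1, p2, p3, p4 = (np.asarray(p, dtype=float) for p in (p1, p2, p3, p4))
    b1, b2, b3 = p2 - p1, p3 - p2, p4 - p3
    b2u = b2 / np.linalg.norm(b2)
    n1 = np.cross(b1, b2)          # normal of the plane through p1, p2, p3
    n2 = np.cross(b2, b3)          # normal of the plane through p2, p3, p4
    m1 = np.cross(n1, b2u)
    x, y = np.dot(n1, n2), np.dot(m1, n2)
    return np.degrees(np.arctan2(y, x))

if __name__ == "__main__":
    # Placeholder coordinates purely to show the call signature (prints -90.0).
    print(round(dihedral_deg((0, 0, 0), (1, 0, 0), (1, 1, 0), (1, 1, 1)), 1))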
2021-12-08T16:06:57.097Z
2021-12-01T00:00:00.000
{ "year": 2021, "sha1": "2e78bc6959a9da46c728fc365a4570401e04fd45", "oa_license": "CCBY", "oa_url": "https://www.mdpi.com/1420-3049/26/23/7343/pdf", "oa_status": "GOLD", "pdf_src": "PubMedCentral", "pdf_hash": "c59f0fb6d2178f3c6df22c666e63fca9cdb0c27a", "s2fieldsofstudy": [ "Chemistry" ], "extfieldsofstudy": [] }
256273586
pes2o/s2orc
v3-fos-license
The essentials of acute oncology

The general medical physician will often encounter patients who develop acute complications of their cancer diagnosis or anti-cancer treatment. Here we provide an overview of emergency solid tumour oncology to guide the initial management of these patients.

Authors: A senior clinical fellow in clinical oncology, St Bartholomew's Hospital, London, UK; B acute oncology clinical nurse specialist, St Bartholomew's Hospital, London, UK; C cancer lead pharmacist, St Bartholomew's Hospital, London, UK; D consultant clinical oncologist, St Bartholomew's Hospital, London, UK.

Introduction

Acute oncology describes a systematic approach to the investigation and management of patients who develop complications of their cancer diagnosis or anti-cancer treatment. This is a rapidly evolving landscape following the development of novel radiation and systemic anti-cancer therapies, which often have new and unpredictable toxicities. Given the rising incidence and prevalence of cancer, and the fact that most patients present to their local hospital with acute complications, this is increasingly relevant to general medical physicians. In this review, we provide an overview of emergency solid tumour oncology to guide the initial management of patients outside of cancer centres.

Metastatic spinal cord compression

Metastatic spinal cord compression (MSCC) occurs in 3-5% of patients with cancer.1 MSCC is caused by epidural extension of vertebral metastases or following pathological compression fractures. MSCC is a medical emergency, because prompt investigation and early diagnosis facilitate the delivery of palliative therapies, which minimise symptoms and the risk of irreversible neurological disability.2 MSCC is most common in breast, lung and prostate cancer, lymphoma or myeloma, but should be urgently investigated in any patient with cancer presenting with red flag symptoms (Table 1).3,4 Definitive investigation is a whole-spine magnetic resonance imaging (MRI) scan within 24 h of presentation, because patients can present with multilevel disease. Initial management comprises dexamethasone (16 mg followed by 8 mg twice daily) with proton pump inhibitor (PPI) cover, analgesics and antiemetics. Steroids should be avoided before biopsy in patients in whom a new diagnosis of lymphoma is suspected. All patients should be discussed with the neurosurgical team for assessment of spinal stability. In patients with no known cancer diagnosis, full radiographic staging should be performed to identify a primary cancer and suitable biopsy site. Tumour markers, including serum paraprotein, prostate-specific antigen (PSA), alpha-fetoprotein (AFP), lactate dehydrogenase (LDH), human chorionic gonadotrophin (hCG) and cancer antigen 125 (CA125), can aid diagnosis. The choice of therapy for MSCC is multifactorial, including the intrinsic radiosensitivity of the primary tumour, patient prognosis, the severity of MSCC, and spinal stability. In general, patients with good performance status and single-level disease should be offered neurosurgical decompression. Patients with poor performance status and multilevel disease are usually better candidates for palliative radiotherapy. Pretreatment neurological function is the strongest predictive factor for neurological outcome.5 Median overall survival has been estimated at 6 months and is better in ambulant versus non-ambulant patients.6
Key points

Patients with cancer presenting with back pain and red-flag symptoms should have a whole-spine MRI scan within 24 h of presentation. Neutropenic sepsis should be suspected in any unwell patient with cancer within 60 days of receiving systemic anti-cancer therapy; patients should receive broad-spectrum intravenous antibiotics within 1 h and should not wait for full blood count results. Patients with cancer are often prescribed glucocorticoids, especially as supportive care; all unwell patients should be assessed for adrenal insufficiency. Do not assume that nausea and vomiting are always treatment related; consider other differential diagnoses. Multiple targeted drugs have recently been identified with an elevated risk of pneumonitis, and this should be carefully investigated for in any patient presenting with breathlessness or a dry cough.

Vasogenic oedema and raised intracranial pressure

Vasogenic oedema and raised intracranial pressure (ICP) can complicate primary brain tumours and secondary brain metastases. Vasogenic oedema is caused by disruption of the blood-brain barrier following tumour secretion of a variety of angiogenic factors, such as vascular endothelial growth factor (VEGF).7 Raised ICP occurs following tumour mass effect or obstructive hydrocephalus and manifests as severe vasogenic oedema. Although presentation varies depending on the site of the lesion, common symptoms include headache, vomiting, changes in vision and seizures. Physical examination can reveal a decreased Glasgow Coma Score (GCS), VIth nerve palsy, papilloedema, and the triad of bradycardia, hypertension and bradypnoea known as Cushing's reflex. Patients should undergo urgent neuroimaging with computed tomography (CT) and/or MRI. Signs of imminent neurological compromise include midline shift, cerebral oedema, hydrocephalus and acute haemorrhage.3 A GCS score <12 warrants urgent intensive treatment unit (ITU) assessment. Initial management includes high-dose dexamethasone (16 mg followed by 8 mg twice daily) with PPI cover, analgesics and antiemetics. Prophylactic anticonvulsants are not recommended for routine use.8 In refractory cases, hypertonic saline and mannitol might be required.

Superior vena cava obstruction

Malignant superior vena cava obstruction (SVCO) is caused by direct tumour invasion, external compression or tumour thrombus.9 Increased venous pressure results in head, neck and upper limb oedema, cyanosis and swelling of subcutaneous vessels.10 The most common cause is bronchogenic malignancy, followed by lymphoma, thymic and germ cell malignancies.11 Investigation is with contrast-enhanced CT, which can assess the primary tumour, the site of occlusion or stenosis and the extent of tumour thrombus. In patients who are unstable and present with life-threatening complications, such as airway obstruction, stridor, hypotension or decreased GCS, urgent endovenous recanalisation with SVC stent placement should be organised.12 In patients who are stable, accurate histological diagnosis is needed to direct anti-cancer therapy, because patients with small cell lung cancer, lymphoma or germ cell tumours might be more suitable for chemotherapy than for endovascular stenting. Prognosis varies significantly depending on the underlying tumour.11 If tumour thrombus is present, anticoagulation should be considered.
Hypercalcaemia of malignancy

Malignant hypercalcaemia occurs in 20-30% of patients with advanced cancer,13 following tumour secretion of parathyroid hormone-related peptide (PTHrP) and vitamin D, or cytokine release from osteolytic metastases.14 Both mechanisms result in increased osteoclastic bone resorption and increased tubular calcium resorption.15 Symptoms include depression, musculoskeletal pain and abdominal pain.16 Investigation with serum total calcium, PTH, PTHrP, phosphate, vitamin D, serum creatinine and estimated glomerular filtration rate (eGFR) will enable a correct diagnosis in most cases. Grading is based upon local laboratory guidelines. In severe hypercalcaemia, patients are usually volume deplete, and intravenous fluid therapy forms the mainstay of initial management, promoting calciuresis.17 Following 24 h of parenteral fluid therapy, intravenous bisphosphonates are used first line to reduce bone resorption, and calcium levels fall steadily over a period of 1-5 days (Table 1). In refractory cases, repeat bisphosphonates, calcitonin, glucocorticoids or denosumab (off label) can be trialled.17

Chemotherapy

Cytotoxic chemotherapy causes cancer cell death by interfering with the cell cycle and inhibiting cell division. However, chemotherapy also causes cytotoxicity in rapidly proliferating non-cancerous epithelial cells, and acute emergency toxicity can present with nausea and vomiting, diarrhoea, pneumonitis or myelosuppression (Table 2).3,4,18 The common toxicities of specific chemotherapeutic agents in solid tumours are presented in Table 3.19

Radiotherapy

There have been significant advances in linear accelerator technology with the development of image-guided, intensity-modulated and stereotactic radiotherapy. Despite this, radiation toxicity can occur and is dependent on the site of treatment. Acute presentations typically include gastrointestinal toxicity and pneumonitis (Table 2). Radiotherapy can rarely cause acute oedema of the brain and spinal cord, and patients presenting with new neurological symptoms should have repeat brain or spinal imaging with CT or MRI. Treatment involves dexamethasone 4-8 mg twice daily.3 Increasing the dexamethasone dose above 16 mg/day has not been shown to provide additional benefit.20

Targeted therapy

Targeted therapies interact with specific molecules involved in cell proliferation and growth. Toxicity varies depending on the mechanism of action, and an overview is provided in Table 4.19 The most common oral targeted therapies are tyrosine kinase inhibitors (TKIs), which typically cause rash, diarrhoea, fatigue, nausea, sore mouth and paronychia as side effects. Multiple targeted drugs have recently been identified with an elevated risk of pneumonitis, and this should be carefully investigated for in any patient presenting with breathlessness or a dry cough.

Steroid therapy

Patients with cancer are often prescribed glucocorticoids, especially as supportive care. The risk of adrenal insufficiency should be considered in any unwell patient with cancer. Clinicians should enquire whether patients carry a Steroid Emergency Card.21 Top-up glucocorticoid therapy should be prescribed during 'sick days', such as when presenting to the emergency department with acute illness.

Immunotherapy

The acute toxicity of immunotherapy is discussed in another article within this edition.
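The hypercalcaemia work-up above lists serum total calcium among the initial investigations. As a small, purely illustrative aside (the albumin adjustment below is a commonly used approximation that is not specified in the article; local laboratory guidance takes precedence), total calcium is often interpreted after correcting for serum albumin:

# Illustrative only; not from the article and not a substitute for local guidance.
# Common SI-unit approximation: adjusted Ca (mmol/L) = total Ca + 0.02 * (40 - albumin g/L).

def albumin_adjusted_calcium(total_ca_mmol_l: float, albumin_g_l: float) -> float:
    """Albumin-adjusted total calcium in mmol/L (widely used approximation)."""
    return total_ca_mmol_l + 0.02 * (40.0 - albumin_g_l)

if __name__ == "__main__":
    # Example: total calcium 2.9 mmol/L with albumin 28 g/L -> ~3.14 mmol/L
    print(round(albumin_adjusted_calcium(2.9, 28.0), 2))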
Neutropenic sepsis

Febrile neutropenia or neutropenic sepsis is a medical emergency and represents a potentially life-threatening complication of systemic anti-cancer therapy. Neutropenic sepsis can be diagnosed in a patient presenting with a temperature over 38.0°C and an absolute neutrophil count (ANC) of <1.0×10⁹/L. However, fever might not always be present and can be masked by concomitant steroid therapy. Therefore, neutropenic sepsis should be suspected in any unwell patient within 60 days of receiving systemic anti-cancer therapy. Initial evaluation should involve a detailed cancer history, including chemotherapy regimen and any prophylactic antibiotic administration. Physical examination should assess circulatory and respiratory function, and patients with features of sepsis should receive prompt resuscitation. Intravenous broad-spectrum antibiotics should be given within 1 h of blood cultures being taken.22 Their administration should not wait for the full blood count results. Empirical antibiotic treatment should be based on local guidelines, epidemiological patterns of causative pathogens and antimicrobial resistance.23 Invasive aspergillosis should be considered in patients with prolonged, profound neutropenia; Pneumocystis jirovecii pneumonia should be considered in patients treated with corticosteroids; and invasive candidiasis should be considered in those with mucositis. C-reactive protein (CRP) levels lack specificity, and an elevated CRP in isolation should not be the sole trigger to prompt initiation of antimicrobial therapy. The use of procalcitonin is currently exploratory.24 Advice about the use of granulocyte colony-stimulating factor (G-CSF) should be sought from the acute oncology team; however, G-CSF is indicated if the patient is septic, has an ANC <0.5×10⁹/L or is at elevated risk of complications.22 G-CSF can be stopped once the ANC is above 1×10⁹/L. Assessment of the risk of medical complications using the Multinational Association of Supportive Care in Cancer (MASCC) score should be performed. Select low-risk patients can be managed as outpatients following a period of observation after initial empiric therapy.22 Mortality varies depending on the MASCC score: under 5% if the MASCC score is ≥21, but potentially up to 40% if the MASCC score is <15.23,25 Prognostic factors include the degree and duration of neutropenia, older age, poor performance status, obesity and metastatic bone marrow infiltration.25

Keywords: acute oncology, red flag, neutropenic sepsis, adrenal insufficiency, pneumonitis. DOI: 10.7861/clinmed.2022-0561

Table 3. Common solid tumour chemotherapy toxicities19 (columns: class of drug; drug name; cancer sites; common toxicity; DPD = dihydropyrimidine dehydrogenase).
Table 4. Common solid tumour-targeted therapies and their toxicities19 (columns: drug class/target; drug name; cancer sites; common or significant toxicities; GIST = gastrointestinal stromal tumour; TKI = tyrosine kinase inhibitor).
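As a purely illustrative aside, and not a clinical decision tool, the numerical thresholds quoted in the neutropenic sepsis section above (temperature over 38.0°C with ANC below 1.0×10⁹/L; MASCC ≥21 as lower risk and <15 as higher risk) can be expressed as a minimal sketch; the function names and the intermediate-band wording are ours:

# Illustrative only -- encodes the thresholds quoted in the text; not for clinical use.

def meets_neutropenic_sepsis_criteria(temp_c: float, anc_10e9_per_l: float) -> bool:
    """Temperature over 38.0 C together with an ANC below 1.0 x 10^9/L."""
    return temp_c > 38.0 and anc_10e9_per_l < 1.0

def mascc_risk_band(mascc_score: int) -> str:
    """Coarse banding of the MASCC score as described in the text."""
    if mascc_score >= 21:
        return "lower risk (reported mortality under ~5%)"
    if mascc_score < 15:
        return "higher risk (reported mortality potentially up to ~40%)"
    return "intermediate (15-20): use clinical judgement and local guidance"

if __name__ == "__main__":
    print(meets_neutropenic_sepsis_criteria(38.4, 0.3))  # True
    print(mascc_risk_band(22))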
2023-01-27T06:16:01.770Z
2023-01-01T00:00:00.000
{ "year": 2023, "sha1": "3791c8387f330ce7dfda0c85d3b45ee77866635b", "oa_license": "CCBYNCND", "oa_url": "https://www.rcpjournals.org/content/clinmedicine/23/1/45.full.pdf", "oa_status": "GOLD", "pdf_src": "PubMedCentral", "pdf_hash": "cb4a8851a593aa533fe08f8c052af99e1935e7c7", "s2fieldsofstudy": [ "Medicine" ], "extfieldsofstudy": [ "Medicine" ] }
20753829
pes2o/s2orc
v3-fos-license
Prediction of transcription factor binding sites affected by SNPs located at the osteopontin promoter

These data contain information related to the research article entitled "Osteopontin splice variants and polymorphisms in Cancer Progression and Prognosis" [1]. Here, we describe an in silico analysis of transcription factors that could have altered binding to their DNA target sequence as a result of SNPs in the osteopontin gene promoter. We concentrated on SNPs associated with cancer risk and development. The analysis was performed with PROMO v3.0.2 software, which incorporates TRANSFAC v6.4. We also present a figure depicting the putative transcription factor binding according to genotype.

Subject area: Biology, Molecular Biology. More specific subject area: effect of SNPs on the binding of transcription factors for the gene osteopontin.

Value of the data: These data describe how putative DNA-binding sites for transcription factors can be created or interrupted by the changes in sequence generated by SNPs in the promoter of osteopontin. Differential binding among SNP genotypes can potentially explain why these SNPs have been associated with changes in the risk of cancer for a specific population. This analysis is an example of how databases, such as those containing SNP genotypes, and predictive tools for DNA-binding sites of transcription factors in a specific sequence could be used to select potential signaling pathways modulating the development of cancer.

Data

The table provided in this article is a list of the transcription factors predicted to bind a DNA sequence at the SNPs contained in the osteopontin promoter. We analyzed only those SNPs that have been statistically shown, in a population, to have an effect on cancer risk and prognosis for the carriers. For each SNP we present both allele sequences. Each analysis contains the rs ID and the nucleotide position in reference to the osteopontin promoter; a schematic representation of the binding of the transcription factors to their target sequence; and an analysis of how similar the binding site is compared to its canonical binding sequence.

Experimental design, materials and methods

Analysis of SNP sequences was performed using the software PROMO v3.0.2 (which utilizes TRANSFAC v6.4) [2,3]. For each osteopontin gene promoter SNP, the sequences carrying each allele were loaded as the query sequence to search for potential binding sites. The prediction was carried out considering human sites and human transcription factors only. The output of this analysis is presented in Table 1 (Table 1: transcription factor binding predictions for the sequences associated with the SNP genotypes located in the promoter of the osteopontin gene). Each analysis contains the rs ID that corresponds to each SNP and its position relative to the transcription start site of osteopontin. For each SNP, we present the respective results for both sequences loaded as the query sequences, together with a schematic representation (boxes in color, also indicated with numbers) of the binding of the transcription factor to the target sequence and a list of the putative transcription factors binding to the sequence. For each transcription factor site, several predicted parameters are reported.
These include the transcription factor name, with the database accession number in brackets; the start and end positions of the putative binding sequences; Dissimilarity (%), which corresponds to the rate of dissimilarity between the putative and consensus sequences for a given transcription factor; Sequence, the nucleotide sequence of the potential binding site; and Random Expectation (RE), indicating the expected occurrences of the match in a random sequence of the same length as the query sequence according to the dissimilarity index, presented as RE equally (equi-probability for the four nucleotides) and RE query (nucleotide frequencies as in the query sequence). Markedly different changes are highlighted in grey and the SNP is highlighted in red. In Fig. 1 we depict the integration of information obtained from this predictive analysis with data previously reported for transcription factors binding to the osteopontin promoter.
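The predictions above were produced with PROMO/TRANSFAC. As a purely illustrative toy example (this is not the PROMO algorithm, and the motif and allele sequences below are invented), the idea behind the "Dissimilarity (%)" column can be sketched as a simple comparison of a candidate site against a consensus motif, scanned across the two allele sequences of a SNP:

# Toy illustration only; not the PROMO/TRANSFAC method. Motif and sequences are invented.

def dissimilarity_percent(site: str, consensus: str) -> float:
    """Percent of positions where the candidate site differs from the consensus."""
    if len(site) != len(consensus):
        raise ValueError("site and consensus must be the same length")
    mismatches = sum(1 for a, b in zip(site.upper(), consensus.upper()) if a != b)
    return 100.0 * mismatches / len(consensus)

def best_match(promoter_seq: str, consensus: str):
    """Slide the consensus along the sequence and return the best-matching window."""
    best = None
    for i in range(len(promoter_seq) - len(consensus) + 1):
        window = promoter_seq[i:i + len(consensus)]
        d = dissimilarity_percent(window, consensus)
        if best is None or d < best[2]:
            best = (i, window, d)
    return best  # (start position, matched sequence, dissimilarity %)

if __name__ == "__main__":
    # Hypothetical alleles of a promoter fragment differing at one SNP position.
    allele_t = "GGATTGTTCATGAAAC"
    allele_g = "GGATTGTGCATGAAAC"
    motif = "TGTTCA"  # invented consensus, for illustration only
    print(best_match(allele_t, motif))  # exact match, 0% dissimilarity
    print(best_match(allele_g, motif))  # best window carries one mismatch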
2018-04-03T02:01:01.352Z
2017-08-02T00:00:00.000
{ "year": 2017, "sha1": "4a44594ecd5d91dd000a95f9bdd9020d6bdb70ab", "oa_license": "CCBY", "oa_url": "https://doi.org/10.1016/j.dib.2017.07.057", "oa_status": "GOLD", "pdf_src": "PubMedCentral", "pdf_hash": "4a44594ecd5d91dd000a95f9bdd9020d6bdb70ab", "s2fieldsofstudy": [ "Biology" ], "extfieldsofstudy": [ "Biology", "Medicine" ] }
3484898
pes2o/s2orc
v3-fos-license
Acute Leriche Syndrome due to Embolisation of an Atrial Myxoma

The presentations of cardiac myxoma are diverse, from asymptomatic to a variety of symptoms due to embolisation. Total occlusion of the abdominal aorta due to embolisation of an atrial myxoma is a rare but life-threatening event which demands an urgent diagnosis and prompt intervention with embolectomy. In this paper a patient with acute-onset paraplegia is presented. CT angiography revealed a large, totally occlusive thrombus in the infrarenal abdominal aorta extending into the common iliac arteries. A thrombus in the left atrium was identified as the source of the embolus.

Case Report

A 68-year-old patient was admitted to the emergency department after a fall during a bike ride because of sudden dizziness. Initially she only had a feeling of discomfort and weakness in the lower extremities and was still able to stand. Shortly afterwards this progressed to paralysis in both legs. Her medical history included diabetes mellitus and active smoking. On admission, the patient was in poor clinical condition with mild tachycardia and fluctuations in blood pressure. She was confused, anxious and sweaty but obeying commands. Clinical examination showed pallor, coldness and paralysis of both legs. Femoral, popliteal and pedal pulses were absent. CT angiography revealed a large, totally occlusive thrombus in the infrarenal abdominal aorta extending into the common iliac arteries. A thrombus in the left atrium was identified as the source of the embolus (Figures 1 and 2). Electrocardiography showed ST-segment elevation, consistent with an acute lateral myocardial infarction, and the patient had significantly elevated troponin I levels.

Procedure

Intravenous heparin infusion was started and the patient was transferred to the operating theatre for a bilateral transfemoral embolectomy. Longitudinal groin incisions were made to expose the common femoral arteries and their bifurcation. A size 5 Fogarty catheter was then passed up to the aorta bilaterally with retrieval of a transparent, gelatinous embolus. Biopsies were taken. There was no retrieval of any thrombus distally, but there was adequate backflow. Closure of the femoral arteries was done with patch angioplasty after endarterectomy because of severe occlusive pathology. Fasciotomy was not considered because there were no signs of compartment syndrome in the lower extremities. At the end of the procedure both lower extremities were warm and had good capillary refill. The patient required inotropic support and was transferred, intubated, to the ICU. Postoperative transesophageal echocardiography showed a friable mass with irregular margins and central necrosis in the left atrium (2.3 cm × 2.2 cm) (Figure 3). There was apical ballooning with a large zone of apical, anteroseptal and anterolateral akinesis and contraction of the basal segments only. Left ventricular ejection fraction was reduced to 26%. Coronary angiography showed no significant coronary heart disease and the patient was diagnosed with Takotsubo cardiomyopathy [1]. Histological analysis of the specimen demonstrated short cords of globular myxoma cells embedded in a myxoid stroma. These findings were consistent with the diagnosis of embolisation of an atrial myxoma. The patient was transferred to a university hospital for resection of the myxoma. The resection was performed through a median sternotomy with the aid of cardiopulmonary bypass, using a right atriotomy and transseptal approach.
The myxoma was identified, attached laterally near the pulmonary vein, and resected in toto. Complete resection was confirmed visually and by ultrasound. After the operation the patient was admitted to the intensive care unit for weaning and vasopressor support. After extubation this support could easily be stopped and there were no clinical signs of cardiac decompensation. Transthoracic echocardiography showed no dilatation of the atria and good in- and outflow of both ventricles. On day three, the patient was transferred to the regular hospital ward, where the postoperative course was complicated by a pneumothorax for which a chest tube was placed. The patient also developed urosepsis, which was treated with IV antibiotics. On day fifteen she was able to leave the hospital in good clinical condition. Follow-up transthoracic echocardiography after one month showed normal dimensions of both atria and good systolic function of both ventricles.

Discussion

Myxoma is the most common primary cardiac tumour, with 90% of cases occurring in the atria, mainly the left atrium. Myxomas are neoplasms of endocardial origin. The malignant potential remains doubtful, but there are a few reports of the remote growth of myxomatous material that has embolized [2]. A study by Pinede et al. of 112 cases of atrial myxoma from 1959-1998 showed that systemic embolisation is a common presentation apart from cardiac obstruction and systemic symptoms [3]. More than two thirds of myxomatous emboli migrate to the central nervous system, but any arterial bed may be affected. Recorded cases document emboli in the upper and lower extremities, aortic saddle, coronary arteries, kidneys, liver, spleen, eye, skin, and more [2]. Although the prevalence of atrial myxoma is higher in females, systemic embolisation occurs more frequently in males. The treatment of choice for myxomas is surgical removal. After the diagnosis has been established, surgery should be performed promptly, because of the possibility of embolic complications or sudden death. Acute occlusion of the abdominal aorta due to embolisation of an atrial myxoma is rare, but several cases have been reported [4][5][6][7]. The clinical presentation is mostly sudden and characterized by pain in the lower extremities, pallor, paraesthesia and paralysis. Depending on the degree of occlusion, distal pulses will be weak or absent. However, the clinical presentation can be misleading, as neurologic symptoms of the lower extremities can predominate and the diagnosis can be mistaken for stroke or a central nervous system lesion. Management of this condition involves intravenous heparin infusion followed by an emergency embolectomy. Aortoiliac embolectomy via bilateral common femoral arteriotomies is the preferred procedure. If adequate inflow cannot be reestablished, an extra-anatomic bypass is performed [8]. Ischemic complications including gastrointestinal malperfusion, renal infarction, and paralysis secondary to spinal cord ischemia can occur. Rhabdomyolysis is a frequent complication, and the use of additional contrast associated with catheter-based techniques of revascularization may place the patient at increased risk of further renal insufficiency [9]. In conclusion, the diagnosis of embolisation of a myxoma should be strongly suspected in acute onset of paraplegia, especially in younger patients. Surgery should be performed urgently to retrieve the emboli and to decrease the risk of ischemia and reperfusion injury.
2019-03-07T14:02:55.148Z
2015-08-02T00:00:00.000
{ "year": 2015, "sha1": "702c4731fbf2447de503ea64b998745bdf277772", "oa_license": "CCBY", "oa_url": "https://www.omicsonline.org/open-access/acute-leriche-syndrome-due-to-embolisation-of-an-atrial-myxoma-2329-6925-1000214.pdf", "oa_status": "HYBRID", "pdf_src": "MergedPDFExtraction", "pdf_hash": "dc16edae2f53dac5977dbbf4edd93f11414d2e9a", "s2fieldsofstudy": [ "Medicine" ], "extfieldsofstudy": [ "Medicine" ] }
248136719
pes2o/s2orc
v3-fos-license
The Influence of the Acceleration Admixture Type and Composition of Cement on Hydration Heat and Setting Time of Slag Blended Cement This article presents recent research on cements containing GGBFS and their modifications with accelerating admixtures. The initial setting time and hydration heat evolution results are presented for cement CEM II/B-S and CEM III/A manufactured with three Portland clinkers of various phase compositions. The research was carried out at 8 °C and 20 °C. The main objective is to assess the behavior of blended cements in cooperation with modern admixtures that contain nucleation seeds. The authors aimed to compare and evaluate different methods to reduce setting time, namely, the effects of temperature, the specific surface area of cement and GGBFS, the type of Portland clinker, the content of GGBFS, and presence of accelerators. Many of these aspects appear in separate studies, and the authors wanted a more comprehensive coverage of the subject. Those methods of reducing the setting time can be ranked: the most effective is to increase the temperature of the ingredients and the surroundings, the second is to reduce the GGBFS content in cement, and the use of accelerators, and the least effective is the additional milling of Portland clinker. However, of these methods, only the use of accelerators is acceptable in terms of sustainability. Prospective research is a detailed study on the amounts of C-S-H phase and portlandite to determine the hydration rate. Introduction Concrete is one of the most popular building materials. Since the patenting of Portland cement, increasing technical and environmental requirements have forced modifications to this material. One of the first modifiers used was accelerating admixtures. In 1885 the first compound for this purpose, calcium chloride, was used [1]. Initially, accelerating admixtures were used due to the lower specific surface area of cement and to concrete at reduced temperatures [1,2]. Today, they are also used in precast production and to reduce formwork rental time [3,4]. However, calcium chloride was soon found to cause steel corrosion, necessitating the invention of other compounds to replace it. However, the use of calcium chloride continues to be investigated, including protecting and repairing objects [5,6]. Accelerating admixtures contain the following components: nitrates and nitrites, formates; carbonates; diethanolamine (DEA); triethanolamine (TEA); triisopropanolamine (TIPA); bromides; fluorides; alkali hydroxides [3,4,[7][8][9][10][11]. Currently, nanomaterial-containing admixtures are developing more rapidly. They are referred to as C-S-H seeds, crystal seeds, nanosize C-S-H, nanocrystals, nucleation seeds, nano modifiers, and other names. New developments are still being sought [12][13][14][15][16]. A leading trend to mitigate the negative environmental impact of cement production is the use of waste materials [17,18]. Due to the widespread use of ground granulated blast furnace slag (GGBFS), cement containing it was the core material used in this study. The influence of slag on the heat of hydration is described by many authors [19][20][21]. These sources indicate that GGBFS reduces the amount of hydration heat released and the intensity of hydration heat evolution. Some also indicate the appearance of the characteristic second peak on the heat release graph caused by the GGBFS reaction for several slag cements [22,23]. 
Furthermore, the initial setting time is delayed [24,25], although some sources present the beneficial effect of a small amount of slag in shortening it; this may be attributed to an effect similar to that of crystal seeds [26]. The effect of the specific surface area of GGBFS on the initial setting time and the amount of hydration heat released is also reported. The hydration reaction rate increases as the fineness of the slag increases; this is related to the larger surface area available for the hydration reaction to occur [23,27]. The specific surface area of cement has a similar effect but is more significant than that of slag due to the higher reactivity of cement at the beginning of the hydration process [28][29][30]. Although the influence of specific surface area is well known, it should be assessed to determine whether it is possible to override it with accelerators. The phase composition of Portland clinker varies according to the raw material feed and the methods used in its production. Clinker is a composition of tricalcium silicate (alite-C 3 S), dicalcium silicate (belite-C 2 S), tricalcium aluminate (celite-C 3 A), and tetracalcium aluminoferrite (brownmillerite-C 4 AF). Typical phase composition ranges are: C 3 S 45-70%, C 2 S 10-30%, C 3 A 5-12%, C 4 AF 5-12% [31,32]. The heat of hydration released by the individual phases is as follows: C 3 S 500 J/g, C 2 S 260 J/g, C 3 A 870 J/g, C 4 AF 420 J/g [2]. C 3 S has the greatest share in shaping the heat evolution of hydration due to its high heat release rate and its highest content, and C 3 A due to its highest heat release rate. In the initial hydration stage, C 3 A has the greatest influence due to its fastest reaction [32,33]. There are few comprehensive comparisons of the hydration heat and setting time of cements with different phase compositions; single test results are reported for specific cements. Cement containing more C 3 A achieved the highest amount of hydration heat released, although the highest heat release does not always coincide with the fastest initial setting time [34]. As the ambient temperature increases, the rate of chemical reactions increases. The effect of increased temperature during the hydration reaction on heat release and initial setting time is better described in the literature; effects are described for Portland cement [35,36] and for cement containing non-clinker components [37,38]. However, comparisons of increased temperature with the other factors affecting the acceleration of the hydration reaction (specific surface area, clinker type, accelerators, taken into account in the present research) are lacking. Admixtures containing C-S-H seeds are currently the subject of considerable research. Most studies report beneficial effects on the properties of Portland cement [39][40][41][42]. Fewer sources show the interaction of these admixtures with blended cement; cements containing fly ash [43,44] and GGBFS [45,46] were the subject of such research. Improved mechanical properties, reduced initial setting time, and effects on the consistency and durability of cement composites are described; in most cases, the effect is beneficial. Calcium nitrate is a well-studied concrete accelerating admixture. Its performance is well documented at average and reduced temperatures and in cooperation with Portland cement and blended cement; beneficial effects on mechanical properties and initial setting time are described [7,10,47,48].
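As a rough illustration of how the per-phase heats quoted above combine, the short sketch below computes a weighted heat of complete hydration for a clinker of given phase composition. It is only a back-of-the-envelope aid: the heat values are the literature figures cited in the text, while the phase contents are hypothetical numbers within the ranges given, not the clinkers tested in this study.

```python
# Illustrative estimate of clinker hydration heat from phase composition.
# Heat values (J per g of phase) are the literature figures quoted in the text;
# the phase contents below are hypothetical values within the cited ranges.

HEAT_J_PER_G = {"C3S": 500, "C2S": 260, "C3A": 870, "C4AF": 420}

def clinker_heat(phase_content_pct):
    """Weighted heat of complete hydration in J per gram of clinker."""
    return sum(HEAT_J_PER_G[p] * pct / 100.0 for p, pct in phase_content_pct.items())

example_clinker = {"C3S": 60, "C2S": 15, "C3A": 8, "C4AF": 10}  # hypothetical composition
print(f"Estimated heat of complete hydration: {clinker_heat(example_clinker):.0f} J/g")
# roughly 450 J/g; 72 h calorimetry gives lower values because hydration
# is far from complete at that age.
```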
However, the possibility of a calcium nitrate overdose, which results in adverse effects, should be considered [7,11]. In the current study, this admixture was used as a reference to compare the effectiveness of the others. Cement kiln dust (CKD) is a waste generated during cement production in the rotary kiln [49][50][51] and is considered a hazardous waste [52]. The effect of CKD on concrete properties varies and depends on the raw materials used in cement production, the type of production technology, and the dust capture installation [49]. Currently, CKD is used in soil stabilization, low-strength concrete manufacture, asphalt concrete production, and artificial aggregate preparation [53,54]. Furthermore, CKD is useful in the preparation of alkali-activated blast furnace slag binders (AAS) [52]. Therefore, it may help accelerate the hydration of cement containing GGBFS; an acceleration of the initial setting time and an improvement in mechanical properties have been reported [55,56]. CKD is also used in combination with cement containing fly ash, GGBFS, and silica dust [57,58]. However, massive amounts of added CKD can degrade composite properties [58,59]. As a result, research on CKDs contains contradictory reports on their effectiveness and performance. Sodium hydroxide, which increases the pH of water when dissolved in it, may have a similar activating effect on the latent hydraulic properties of GGBFS [60,61]. It is also used in AAS [61,62]. The beneficial effect of sodium hydroxide can only be observed during the initial hydration period; after a longer period, the strength properties deteriorate [63,64]. Sodium hydroxide is also used to activate fly ash cement [65,66]. The motivation for this study was the growing popularity of modern setting- and hardening-accelerating admixtures containing nucleation seeds. The need to compare C-S-H seed admixtures with other methods is evident from the number of recent articles published on the subject. The main objective is to assess the behavior of blended cements in cooperation with this admixture, because little comprehensive research on GGBFS blended cement has been reported. The authors wanted to address mainly cement containing GGBFS in order to compare different methods of reducing the setting time: the effects of temperature, the specific surface area of cement and GGBFS, the type of Portland clinker, the content of GGBFS, and the presence of accelerators. Many of these aspects appear in separate studies, and the authors wanted to provide deeper coverage of the subject; this comparison is the main value of this article. In previous work, the authors have addressed the topic of activators in terms of the microstructure of hardened cement paste, strength, and initial setting time [64,67,68]. Prospective research will include a detailed study of the amounts of C-S-H phase and portlandite to determine the hydration rate; XRD and TGA-DTG methods may be used on blended cement pastes of different ages. Cement The cement used in the research was manufactured in the laboratory from Portland clinker, ground granulated blast-furnace slag (GGBFS), and anhydrite (set regulator). Portland clinker was replaced with GGBFS in amounts of 0% (equivalent to CEM I), 35% (equivalent to CEM II/B-S), and 65% (equivalent to CEM III/A). The properties of the GGBFS exceed the minimum values specified by the standards. GGBFS reactivity indices are greater than required by EN 15167-1: 62.8% after seven days (against the 45% required) and 88.3% after 28 days (compared to the 70% required).
The amorphous phase content of the slag is 98.5%, which is also greater than the 67% required by EN 197-1. The chemical characteristics and specific surfaces of the GGBFS used are given in Table 1. The GGBFS XRD analysis is presented in Figure 1. Three different Portland clinkers, differing in phase composition, were used in the investigation. Each clinker was ground to a Blaine specific surface of 3000 cm 2 /g, 4000 cm 2 /g, and 5000 cm 2 /g. The phase composition of the Portland clinkers, determined by the Bogue equations, is given in Table 2. Their chemical composition is presented in Table 3. Anhydrite amounts were established for each blend separately to obtain 2% SO 3 in the cement. The characteristics are presented in Table 4. All materials involved in the research are presented in Figure 2 (clinkers and GGBFS) and Figure 3 (accelerators). Accelerators Four different accelerators were used in the research. 1. A modern setting and hardening acceleration admixture (symbol S), available on the market, which contains crystal seeds in the form of C-S-H nanoparticles. 2. Calcium nitrate (symbol C), used in this study as a reference against which the other accelerators were compared. 3. A 20% sodium hydroxide solution (NaOH; symbol N), added to reach 5% sodium hydroxide per cement mass. 4. Cement kiln dust (CKD) (symbol D), obtained from one of the Polish cement plants; CKD was introduced as an ingredient in cement in the amount of 10% of the total mass of cement. The chemical composition of CKD is given in Table 5. CKD XRD and TG-DSC analyses are presented in Figures 4 and 5. The resemblance to the chemical composition described above is visible. Mixtures Cement pastes were used as research materials and were mixed using the ingredients described above. The compositions of the pastes for the initial setting time tests are given in Table 6. The amounts of water were established by obtaining the standard consistency according to EN 196-3; the water demand results are given in Table 7. Hydration heat tests were conducted on a 5 g cement sample with a water-cement ratio of 0.5 and a proportional amount of accelerators, according to Table 6. The amount of water depended on the cement type and fineness, the accelerator used, and the temperature during the test; the results are given in Table 7. * all types of cement were prepared from all clinkers; x denotes which. ** 3k, 4k, 5k indicate Blaine's specific surface area of cement-3000, 4000, 5000 cm 2 /g. *** NG-typically ground slag with a Blaine specific surface area of 3200 cm 2 /g. The remaining mixtures were prepared with AG-additionally ground slag with a Blaine specific surface area of 3870 cm 2 /g. **** S, C, N, D-symbols of accelerators. Methods The standard consistency and initial setting time of cement were tested according to the procedures given in the standard EN 196-3. The tests were carried out using an automated Vicat apparatus manufactured by Matest, Arcore, Italy. Hydration heat tests were conducted using a TAM Air isothermal calorimeter. The device uses stirring ampoules that allow heat measurement from the moment of water-cement contact. The tests ran continuously for 72 h, and the results were recorded every minute. All components were stored at the appropriate temperature (8 °C or 20 °C) for at least 24 h before testing. The mixtures were prepared in a room at a stable temperature, and the test equipment was likewise placed in a room with a suitable temperature. Initial Setting Time The influence of clinker type on the initial setting time of the unmodified pastes at 20 °C is shown in Figure 6.
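For reference, the Bogue calculation used to obtain the phase compositions in Table 2 can be sketched as follows. This is the standard ASTM C150 form of the equations (valid when the Al 2 O 3 /Fe 2 O 3 ratio exceeds about 0.64); the oxide contents in the example call are hypothetical and are not the analyses of clinkers C1-C3.

```python
# Standard Bogue equations (ASTM C150 form, valid when Al2O3/Fe2O3 > 0.64).
# Oxide contents are in mass %; the example values are hypothetical.

def bogue(cao, sio2, al2o3, fe2o3, so3=0.0):
    c3s = 4.071 * cao - 7.600 * sio2 - 6.718 * al2o3 - 1.430 * fe2o3 - 2.852 * so3
    c2s = 2.867 * sio2 - 0.7544 * c3s
    c3a = 2.650 * al2o3 - 1.692 * fe2o3
    c4af = 3.043 * fe2o3
    return {"C3S": c3s, "C2S": c2s, "C3A": c3a, "C4AF": c4af}

# Hypothetical oxide analysis of a clinker, in mass %:
print(bogue(cao=66.0, sio2=21.5, al2o3=5.2, fe2o3=2.8, so3=1.0))
```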
Cements made from clinker C1, which contained 13.5% C 3 A, were characterized by the shortest initial setting time. The cements made of the other two clinkers (containing 4.0% and 2.4% C 3 A) exhibited longer initial setting times. These relationships were true when Portland cement CEM I, Portland slag cement CEM II/B-S, and blast furnace CEM III/A cement were tested at 20 • C, and CEM I cements at 8 • C. More significant relative increases in differences were observed for cement containing GGBFS. The initial setting time increased with increasing GGBFS amount in cement and decreasing ambient temperature and specific surface area (Figures 6 and 7). More considerable relative differences were observed at 20 • C. This was a general decrease in the hydration reaction rate at the reduced temperature. The specific surface area of ground granulated blast furnace slag did not affect the initial setting time due to its near-neutral nature at the initial stage of cement pastes curing (Figures 6 and 7). The initial setting time was less variable at lower temperatures (Figure 7). The relationships for GGFBS content and clinker-specific surface area corresponded to those for the higher temperature. The clinker type affected the same for Portland cement CEM I but differently for the other two types of cement. For CEM II/B-S, clinkers with a lower C 3 A content (C2-4.0% and C3-2.4%) exhibited shorter setting times than cement made from C1 (13.5% C 3 A). For CEM III/A cement, clinkers C1 and C2 showed shorter initial setting times than clinker C3. The results of the accelerator-modified cement tests are presented in Figures 8 and 9. All activators caused a reduction in the initial setting time. For all cements, the most significant effect in reducing the initial setting time occurred with sodium hydroxide (reducing the initial setting time by approximately 300-400% depending on the type of cement). The next most effective accelerator was an admixture containing nucleation seeds for CEM II/B-S and CEM III/A cements made from C1 and C2 clinkers, as well as for CEM II/B-S cement made from C3 clinker (reduction of approximately 50-100%). In the case of CEM III/A cement made of C3, calcium nitrate replaced it. The least effective was to accelerate the initial setting time with CKD (reduction in the initial setting time by approximately 10-50%). Only in the case of CEM III/A cement made of C1 clinker did CKDs reduce the setting time more than calcium nitrate. In the case of cements modified with setting activators, the relationships followed a pattern similar to that at higher temperatures. All the activators used reduced the initial setting time. In cement made with clinker C1, the most effective activator was nucleation seeds, and the second most effective was calcium nitrate. In cements made with the other two clinkers, the relationship reversed and the beneficial effect of a higher C 2 S content in cement containing C3 clinker, in cooperation with calcium nitrate, was more pronounced than at higher temperature. CKDs reduced the initial setting time the least. The effect of sodium hydroxide was not studied at lower temperatures (Figures 8 and 9). Hydration Heat Evolution The results of the evolution of the hydration heat of unmodified cements made from C2 clinker with specific surface areas of 3000, 4000, and 5000 cm 2 /g are presented in Figures 10-13. 
The graphs show that increasing the fineness of Portland clinker resulted in shortening the induction period, increasing the maximum heat release in the post-induction period, and increasing the heat of hydration after 72 h. Furthermore, there were more significant differences between cements made from clinkers with a specific surface area of 3000 and 4000 cm 2 /g than between 4000 and 5000 cm 2 /g; this was the result of the more significant relative difference between the smallest and average and average and largest specific surface areas. The higher content of GGBFS in cement caused an increase in differences in the amount of hydration heat released between cements containing clinkers with a specific surface area of 4000 and 5000 cm 2 /g, compared to cements without it. Figures 14 and 15 show the graphs of the heat of hydration of CEM III/A cement made from C2 clinker with a specific surface area of 4000 cm 2 /g and slag with specific surface areas of 3200 and 3870 cm 2 /g. The results were very similar. The plot shows that the use of slag with a lower specific surface area resulted in a slightly shorter induction period, a slightly higher maximum hydration heat release in the postinduction period, and a similar hydration heat after 72 h, compared to cement with slag treated with additional grinding. The evolution characteristics depended on the phase composition of the cement. These relationships are shown in Figures 16-19. Cements made from clinker C1 had the highest maximum heat of hydration in the post-induction period and the highest heat released in 72 h. Cements made of C2 clinker were the second in order with respect to the above characteristics, and those made of C3 clinker were the last. It was valid for all types of cement, regardless of the amount of ground granulated blast furnace slag, although the differences were most evident for Portland cement CEM I. The heat of hydration evolution graphs for CEM III/A, made from C1 and C3 clinkers, did not reveal a second maximum peak in the post-induction period, which was the opposite of the C2 clinker. As described earlier, the relationships occurring at an ambient temperature of 20 • C were also valid at 8 • C. Decreasing temperature caused an elongation of the induction period, an elongation of the postinduction period, a decrease in its maximum, and a decrease in the heat of hydration after 72 h. More evident than at higher temperatures was the relative differences in heat release between cements with different specific surface areas after 72 h. For all cements, it could be seen that up to about 36 h, the hydration heat was higher for cements made from C2 clinker compared to C1 clinker, which was much smaller at a higher temperature for cements CEM II/B-S and CEM III/A cements. Later, the relationship reversed. The results of the hydration heat tests for accelerator-modified cements are presented in Figures 20-31 for CEM II/B-S and CEM III/A, separately for cements containing different Portland clinkers. In the case of cements modified with admixture containing nucleation seeds (yellow curves), the induction period was shortened, the maximum hydration heat evolved in the post-induction period increased and occurred earlier, and the amount of hydration heat evolved after 72 h was increased. The accelerator was slightly more effective in CEM III/A than in CEM II/B-S and cement made from clinkers containing less C 3 A (C2 and C3). 
The effect of calcium nitrate (green curves) differed to a greater extent depending on the type of clinker and the content of ground granulated blast furnace slag in the cement. In the case of CEM II/B-S cement, it was most effective in cooperation with cement made from C3 clinker, which contained the most C 2 S. It increased the most in the post-induction period and heat release after 72 h. In cooperation with the other two clinkers, the maximum heat released decreased, and the heat after 72 h remained unchanged. In the case of all cements, it shortened the induction period, and in cements made of C2 and C3 clinkers, the maximum heat release occurred earlier. Calcium nitrate had a similar performance for CEM III/A cements made from clinkers C2 and C3. There was an increase in the maximum heat release in the post-induction period and the heat released after 72 h. In association with clinker C1, the heat of hydration after 72 h was lower and the maximum heat release in the post-induction period did not change compared to the reference sample. Sodium hydroxide (burgundy curves) greatly shortened the initial setting time and completely changed the characteristics of the calorimetric curve. In the case of CEM II/B-S and CEM III/A cements made from C1 clinker, the induction period could not be distinguished. In the other types of cement, this period was concise and characterized by a much higher amount of heat released than when modified with the other activators. A higher heat of hydration characterizes the CEM II/B-S cement after 72 h. After 72 h of CEM III/A cement, the heat of hydration did not increase. The calorimetric curves of the CKD-modified cements (blue curves) were similar to that for the reference samples (black curves), regardless of clinker type and the content of GGBFS. No clear correlation of the effectiveness of dust as an activator with the phase composition of the clinker could be seen. The maximum heat release in the post-induction period did not decrease in any of the cases. In addition, no elongation of the induction period was recorded. In cases where it was shortened, it was of a small magnitude. A slight decrease in the heat of hydration was observed after 72 h only in the case of cement made of C1 clinker. CKD showed slightly higher efficiency in CEM II/B-S cement, which was visible in the earlier occurrence of the maximum release of the heat of hydration. This change was not visible in the case of cements with a higher GGBFS content. In this research, the beneficial influence of CKD was observed; however, it is necessary to recall the different actions of various CKD [49]. Summary Summaries of accelerator effects on hydration heat and initial setting time are presented in Table 8 for 20 • C and 8 • C. All accelerators shortened the initial setting time. The duration of the dormant period was shortened in almost all cases. Only CKD modification did not result in a change in duration. The maximum value of heat in the post-dormant period (second peak) was always greater with accelerator usage. The only exception was calcium nitrate in cooperation with CEM II/B-S based on clinker C1 and C2. It appeared to be caused by the potential overdose of calcium nitrate in ratio to clinker. It was not visible for CEM III/A because of the too high GGBFS content. All accelerators, except CKD, caused an earlier occurrence of the second peak. A similar exception was for CEM II/B-S based on calcium nitrate C1 and C2 clinker. 
The total heat evolved after 72 h was lower, higher, or similar in various cases; no consistent pattern could be identified. Initial Setting Time The initial setting time depends on the content of tricalcium aluminate (C 3 A) in the cement [32,69]. As its amount increases, the amount of ettringite formed increases, which causes a faster build-up of the spatial crystalline structure [70,71]. Increasing the specific surface area of the clinker shortens the initial setting time because a larger cement grain area is available for the hydration reaction. At the lower temperature, the initial setting time of Portland slag cement and blast furnace cement does not differ significantly between the different clinkers. These various behaviors of blended cements at different temperatures should be the subject of more detailed research. Increasing the proportion of GGBFS, a nearly inert component in the initial stage of hydration [72,73], increases the amount of SO 3 per unit mass of C 3 A derived from the clinker. Consequently, in cement containing clinker with a higher amount of this phase, the hydration reaction is proportionally faster than in Portland cement. Increasing the content of GGBFS increases the distance between the clinker grains, which has to be filled with hydration products. A smaller particle size results in an increase in the surface area of the grains available for the reaction to occur, and the reaction rate is faster [2,32]. The explanation for the accelerators' efficiency varies depending on their chemical nature and physical properties. In the case of sodium hydroxide, which acts mainly on the C 2 S and C 3 S phases, rapid cement setting occurs [3,74]; its effect on the cements tested is similar due to their similar combined content of these phases. A similar explanation exists for the use of calcium nitrate, whose efficiency increases as the content of these two phases, especially belite, increases [7]. This was particularly evident in the case of the C3 clinker (18.2% C 2 S), for which it was the second most effective accelerator. The admixture containing nucleation seeds introduces additional microparticles into the batch water, which serve as initial crystallization points; as a result, hydration products can form not only on the surface of the cement grains but also in the space between them [64]. CKDs, which also contain alkalis and sulfides that serve as slag activators, can act similarly [2,3,72]. As a result of using the activators containing nucleation seeds and calcium nitrate, it was possible to bridge the difference between the CEM I-CEM II and CEM II-CEM III cement pairs. However, CKD did not work as well with all of them. Hydration Heat Cements made with C1 clinker were characterized by the highest heat released during the first 72 h and the highest maximum hydration heat released during the post-induction period. Cements containing C2 clinker were second, and those with C3 clinker were third. This observation was independent of the content of ground granulated blast furnace slag. Increasing the specific surface area of the clinker resulted in a shortening of the induction period, an increase in the maximum heat release in the post-induction period, and an increase in the heat released after 72 h, which is confirmed in the literature [2,32]. A higher GGBFS ratio caused greater differences in hydration heat between cements of different specific surface areas. This can be explained by the averaging effect of using slag with the same specific surface area in all cements, in increasing amounts.
Furthermore, the plot for CEM III/A cement showed a characteristic second maximum occurring in the post-induction period. The development of hydration heat release was caused by differences in the content of the C 3 A and C 3 S phases in clinkers. Clinker C1 contained the most considerable amount of the C 3 A phase-13.5% and an average C 3 S content-64.4%. Clinkers C2 and C3 contained a similar amount of C 3 A-4.0% and 2.4%, respectively, but differed significantly in the content of C 3 S (68.7% and 58.0%, respectively)-the phase in which hydration was the second-largest source of heat released during the reaction of cement with water [2,32]. Crystallization seeds' action in cements made from different clinkers was similar due to the specific nature of this activator and the introduction of additional nuclei of crystallization around which the formation of hydration products can begin. Calcium nitrate worked better in cements with higher GGBFS content and in cements made from clinkers with a lower C 3 A content and a higher C 2 S content. For CEM III/A manufactured with clinkers C1, C2, calcium nitrate changed the course of the heat release graph. It might be caused by the overdosage of this admixture [3,7,11]. However, no correlation could be seen between the effectiveness of this activator and the content of the C 3 S phase in which it should also act, considering the data in the literature [3,7]. Sodium hydroxide completely changed the hydration process. Therefore, it was not used at 8 • C. In this research, the beneficial influence of CKD was observed by the small amount of dust added and its important specific surface area. However, it is necessary to recall the different actions of various CKDs [49]. Conclusions The main contribution of this paper is to assess and compare different possibilities of cement setting acceleration. Factors that shorten the setting time can be ranked. The most effective is to increase the temperature of the ingredients and the environment; the second is to reduce the GGBFS content in cement and use accelerators. The least effective is the additional grinding of the Portland clinker. To accelerate cement setting, in most cases, additional clinker grinding, which is a very energy-intensive process [75], can be successfully replaced by the use of setting accelerators. Accelerators were not sufficient to compensate for the temperature difference. Setting activators worked better with cement with lower GGBFS content, leading to the conclusion that these activators affect mainly Portland clinker, at least in the initial phase of cement hydration. The efficiency depended on the phase composition of the Portland clinker. The effect of changing the characteristics of the evolution of the hydration heat using nucleation seeds was similar, irrespective of the cement. However, the effectiveness seemed slightly higher for cement that contained less C 3 A. The effectiveness of calcium nitrate depended on the type of clinker and the GGBFS content in the cement. This research focused on the hydration heat, but more research on amounts of the C-S-H phase and portlandite should be conducted to assess the hydration rate. XRD and TGA-DTG methods for different ages of blended cement pastes may be involved.
2022-04-14T15:21:54.832Z
2022-04-01T00:00:00.000
{ "year": 2022, "sha1": "0369122f84b92fe32ec3ddaa037718137c030731", "oa_license": "CCBY", "oa_url": "https://www.mdpi.com/1996-1944/15/8/2797/pdf", "oa_status": "GOLD", "pdf_src": "PubMedCentral", "pdf_hash": "3adb5545ebd39dca39690f76482d71c2e2bb9632", "s2fieldsofstudy": [ "Materials Science" ], "extfieldsofstudy": [ "Medicine" ] }
246725964
pes2o/s2orc
v3-fos-license
National Monitoring of Veterinary-Dispensed Antimicrobials for Use on Pig Farms in Austria: 2015–2020 Antimicrobial use in livestock production systems is increasingly scrutinised by consumers, stakeholders, and the veterinary profession. In Austria, veterinarians dispensing antimicrobials for use in food-producing animals have been required to report these drugs since 2015. Here, we describe the national monitoring systems and the results obtained for Austrian pig production over a six-year period. Antimicrobial dispensing is described using the mass-based metric, milligrams per population correction unit (mg/PCU) and the dose-based metric, Defined Daily Dose (DDDvet) per year and divided into the European Medicines Agency’s prudent use categories. Pig production was divided into breeding units, fattening farms, farrow-to-finish farms, and piglet-rearing systems. Over all six years and all pig production systems, the mean amount of antimicrobials dispensed was 71.6 mg/PCU or 2.2 DDDvet per year. Piglet-rearing systems were found to have the highest levels of antimicrobial dispensing in DDDvet, as well as the largest proportion of Category B antimicrobials, including polymyxins. Although progress has been made in promoting a more prudent use of antimicrobials in veterinary medicine in Austria, further steps need to be taken to proactively improve animal health and prevent disease to reduce the need for antimicrobials, particularly those critically important for human medicine, in the future. Introduction Globally, antimicrobial use in agriculture, particularly in food-producing animals, is increasingly seen critically by consumers [1]. Although the use of antimicrobials as growth promoters has been banned in the European Union (EU) since 2006, these medications are often still used for disease prophylaxis, and reductions in antimicrobial use (AMU) are both possible and necessary in order to ensure their continued effectiveness against bacterial infections. From 2022, the new Veterinary Medicinal Products Regulation (2019/6) in the EU will legislate new restrictions on AMU in veterinary medicine and requires all member states to monitor and record veterinary AMU in their countries, initially in food-producing animals, but eventually (from 2029) in pets as well [2]. The excessive use of antimicrobials in pig production initially came under criticism in Denmark in the 1990s, and the country was among the first to successively ban a variety of antimicrobials as growth promoters from 1995 onwards [1,3]. Denmark also led the way in benchmarking pig producers and introducing penalty schemes, such as the yellow card for excessive antimicrobial use in 2010 [3]. A number of other European countries, such as the Netherlands, also began to document their veterinary antimicrobial use, and the first EU report of veterinary antimicrobial sales (the ESVAC report) was published by the European Medicines Agency in 2011, using sales data from nine countries [4]. The Austrian health authorities began contributing data on veterinary antimicrobial sales from pharmaceutical companies/wholesale pharmacies to the European Union's annual ESVAC report in 2010. To date, reported sales of veterinary antimicrobial drugs in Austria for food-producing animals have ranged from a maximum of 63 mg/population correction unit (PCU) in 2010 to a minimum of 42.6 mg/PCU in 2019 [5,6]. 
Since 2015, it has been required by local law for all veterinarians in Austria who dispense antimicrobials for use in food-producing animals to annually report the amounts dispensed to the relevant authorities [7]. In addition, antimicrobials are only available from veterinarians, and, since 2005, injectable (as well as intramammary and intrauterine) antimicrobials have been further restricted and can only be dispensed to farmers who are members of the Austrian Animal Health Service (Tiergesundheitsdienst, TGD) and have completed training courses in the use and administration of veterinary medications [8,9]. Antimicrobials administered directly by the veterinarians themselves do not currently have to be reported, although their use is documented in both veterinary practice and on-farm records [7]. Pig production in Austria is not an extremely large industry, when compared internationally. The average herd size is 133 head (ranging from 15,950 holdings keeping only 1-3 pigs to 12 units with more than 3000 pigs) [10,11]. Based on official data available with respect to the reference day of 1st June each year, the pig population included here ranged from a minimum of 2,773,225 pigs in 2019 to a maximum of 2,845,451 in 2015 (mean number of pigs from 2015-2020: 2,802,433; median: 2,799,632) [10]. Pig producers are primarily located in the federal states of Upper Austria (39.8% of total pig numbers in 2020), Lower Austria (27.0%), and Styria (26.8%) [10]. The most recent national data records in metric tonnes in 2020 reported that 73.4% of all veterinary antimicrobials dispensed in Austria were for use in pigs (ranging from 71.8-76.4% between 2016 to 2020), compared to 19.7% in cattle (beef and dairy) and 6.7% in poultry [12]. However, when comparing these figures to other countries, it is important to note that the Austrian national-monitoring system currently only includes antimicrobials dispensed by veterinarians to farmers and does not include those administered by the veterinarians themselves. The data presented here represent the results of the national monitoring of veterinary antimicrobials dispensed between 2015 and 2020. To allow comparison with other countries and systems, the data analysis focuses on using international metrics, such as mg/PCU (population corrected unit) and Defined Daily Doses (DDDvet), as published by the European Medicines Agency and recommended by European expert groups [13,14]. Study Population Pig production in Austria is divided into farrow-to-finish farms, fattening farms, breeding units, and piglet-rearing units. Figure 1 shows the proportions of the different pig production systems in the study population over the years included here. The study population (i.e., farms where antimicrobials were dispensed and reported to the authorities by herd veterinarians) covers between 81% (in 2015) and 87% (in 2020) of the total national pig production. Data are provided in standardised livestock units, as defined by the Austrian Ministry of Agriculture [15]. The vast majority of pigs included in this study population were kept in fattening and farrow-to-finish units (mean: 351,261 and 310,933 LSU; median 348,398 and 315,147 LSU, respectively). An extremely small number of pigs are reared in piglet-rearing systems (mean: 7809 LSU; median 7450 LSU) ( Figure 1). Mass-Based Metrics (mg/PCU) All veterinarians treating farm animals and dispensing antimicrobials to farmers for use in such animals are required by Austrian law to report their annual dispensed amounts [7]. 
The data included here are taken from these national records of annual antimicrobial monitoring between 2015 and 2020 [12]. Table 1 shows the proportions of antimicrobial dispensing by veterinarians for use in the various pig production systems. The vast majority of antimicrobial dispensing in mg/PCU over all six years was for use in farrow-to-finish and fattening farms. It is important to note that the decreasing proportion of pig production units that were "not assignable" to a specific production system has fallen dramatically (from 4.6% to 0.8%) since the monitoring system was first initiated in 2015. This is primarily due to improvements to the electronic-reporting system and the data-plausibility checks now in place. A variety of antimicrobial monitoring guidelines and recommendations suggests the use of dose-based metrics, such as the European Medicines Agency's DDDvet, to allow for divergences in dosing to be accounted for within AMU records [14,16]. Recording antimicrobial dispensing in mg/PCU often leads to an overestimation of some antimicrobials and an underestimation of others [16,17]. Figure 3 demonstrates the proportions of the total antimicrobial-dispensing data collected in 2020 when analysed by mg/PCU or DDDvet. The differences between tetracyclines in mg/PCU (59.6% of all antimicrobials dispensed) compared to around 43.8% of all dispensed DDDvet are particularly striking. By contrast, aminoglycosides make up 8.3% of antimicrobials dispensed by DDDvet compared to just 1.9% by mg/PCU, and polymyxins make up a much higher proportion of overall use (9.5%) by DDDvet compared to under 5% as mg/PCU ( Figure 3). When antimicrobial dispensing is presented by the proportion of DDDvet per year for the entire monitoring period (see Table 2), tetracyclines continue to make up the largest proportion each year (ranging from a maximum of 50.43% in 2018 to a minimum 39.77% in 2019). Extended-spectrum penicillins make up a much lower proportion (between 13.72-15.54%) and remain in second place over the study period, while polymyxins and macrolides alternate for the third most frequently dispensed antimicrobials. By contrast, when antimicrobial dispensing is presented by proportion of mg/PCU, although tetracyclines continue to make up the vast majority of antimicrobial use (generally > 60%), polymyxins have fallen to fifth place and make up only 2.77% to 4.19% of antimicrobial dispensing (compared to a much higher proportion of between 6.93-9.51% when analysed by DDDvet/year) ( Table 3). Antimicrobials of Critical Importance to Human Medicine Antimicrobial dispensing presented here is divided into categories as defined by the European Medicines Agency's Antimicrobial Expert Group (AMEG) [18,19]. Category A is not included, as antimicrobials in this category are not licensed for use in veterinary medicine in the EU (although they may be used off-label in nonfood-producing animal species). Categories B and C are critically important for human medicine and should be used restrictively (Category B: 3rd and 4th generation cephalosporins, fluoroquinolones, and polymyxins) or with caution (Category C includes e.g., macrolides, extended-spectrum penicillins, amongst others). Category D antimicrobials should be used prudently and include tetracyclines, sulfonamide/trimethoprim, beta-lactamase-sensitive penicillins, etc. With the exception of macrolides, Category B ("restrict") antimicrobials are comparable to the WHO's highest-priority, critically important antimicrobials (HPCIA) [18,19]. 
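To make the grouping described above concrete, the sketch below maps the antimicrobial classes named in this section to their AMEG prudent-use categories. It is an illustrative subset only; the complete assignment used in the analysis is the one given in Table 4 and the EMA AMEG documents.

```python
# Simplified AMEG prudent-use grouping for the classes named in the text.
# Illustrative subset only: the complete mapping is given in Table 4 / EMA AMEG.

AMEG_CATEGORY = {
    "3rd/4th generation cephalosporins": "B",   # restrict
    "fluoroquinolones": "B",                    # restrict
    "polymyxins": "B",                          # restrict (e.g., colistin)
    "macrolides": "C",                          # caution
    "extended-spectrum penicillins": "C",       # caution
    "tetracyclines": "D",                       # prudence
    "sulfonamide/trimethoprim": "D",            # prudence
    "beta-lactamase-sensitive penicillins": "D",
}

def category(antimicrobial_class):
    """Return the AMEG category letter, or None if the class is not mapped here."""
    return AMEG_CATEGORY.get(antimicrobial_class.lower())

print(category("Polymyxins"))     # -> 'B'
print(category("Tetracyclines"))  # -> 'D'
```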
Further details are provided in Section 5. In all production systems, the majority of antimicrobials dispensed were in Category D, with the exception of piglet-rearing units, where a substantial proportion of the antimicrobials dispensed were in Category B. For details, see Figure 4 and Sections 2.5 and 2.8 below. Again, the differences between mass-based and dose-based metrics became apparent and can be seen very clearly when comparing Figure 4 (mg/PCU) with Figure 6d (DDDvet for piglet-rearing systems). Route of Administration for the Dispensed Antimicrobials As would be expected, the vast majority of antimicrobials dispensed in all categories for use in Austrian pig production were for oral administration. Category D antimicrobials for oral use ranged from 53 mg/PCU in 2019 to around 66 mg/PCU in 2018, as shown in Figure 5. By mg/PCU, the most frequently dispensed antimicrobial class for oral use in Category D ("prudent use") was tetracyclines (Figure S1). Antimicrobial Use on Piglet Production/Breeding Units Breeding (piglet production) units made up approximately 20.5% of pig-producing units in Austria from 2015-2020, on average, ranging from 19.6% to 21.3% of pig production by LSU. Antimicrobial use on breeding pig units is shown in Figure 6a. The mean number of pigs kept on breeding units was 173,251 LSU. Discussion The data presented here provide a comprehensive overview of veterinary antimicrobial dispensing for use on Austrian pig farms over a six-year period. Given the mandatory nature of reporting and the fact that data were provided for 81-87% of national pig production in Austria, these analyses can be considered an accurate representation of antimicrobial dispensing for use in pig production in the country. Nevertheless, it is important to note that antimicrobials administered directly by veterinarians themselves (rather than dispensed to farmers), while no doubt making up a small proportion of antimicrobial use in pig production overall, were not included in this dataset. The most recent data available on total antimicrobial dispensing for all pig production systems in Austria were calculated to be 68.8 mg/PCU (NB. 1 PCU is approximately equivalent to 1 kg livestock biomass). These figures are comparable with antimicrobial sales reported in a study of veterinary wholesale data in Switzerland in 2015 (77.4 mg/kg) [20], but are higher than those previously reported for a small convenience sample of 75 pig farms in Austria (mean over four years: 33.9 mg/kg) [21]. By contrast, the Austrian national figures are much lower than those recently reported for 67 Irish pig farms (161.9 mg/PCU) or the UK figures for the national pig herd in 2020 (namely 105 mg/kg) [22,23]. With respect to Defined Daily Doses (DDDvet), the mean value of the six-year median DDDvet per year (2.2 DDDvet/year) reported here, covering all pig production systems, is difficult to compare with other dose-based metrics, as calculation methods vary. A recent study in Italy (using national DDD metrics) reported annual median values of 6.24-7.57 DDDita/100 kg on 36 fattening farms [24], which is substantially higher than the Austrian national mean of the six-year median value of 2.17 DDDvet for fattening farms determined here. Meanwhile, a Swiss study of 227 pig farms reported a mean of 4 DDDvet over a one-year period [25], which is also higher than that reported here in Austria.
When analysing antimicrobial use by substance, the Austrian data show that tetracyclines are dispensed in the greatest volumes by mass. However, it is important to note that mass-based calculations are often skewed with respect to older antimicrobial molecules which have higher dosage requirements in mg/kg than other newer drugs which may be more potent [14,16,26]. Oxytetracycline, for example, is licensed for use in pigs in Austria at a dosage of 40 mg/kg/d, which leads to a requirement of 2000 mg per day for a 50 kg pig. In contrast, the polymyxin, colistin, licensed at a dosage of 5 mg/kg/d, leads to a requirement of 250 mg per day for the same pig. This means that when comparing these antimicrobial drugs using mass-based metrics, oxytetracycline appears to be used at an eight-fold higher amount than colistin, which skews the overall proportions of antimicrobial classes in mg/kg. These discrepancies can be balanced out by using the defined daily dose (DDDvet), which refers to the daily dose as a whole, regardless of the amount of antimicrobial drug administered in milligrams. For this reason, a comparison using dose-based metrics is essential [16]. Nevertheless, even when analysed by DDDvet metrics, tetracyclines still made up the majority (>55%) of antimicrobials dispensed for use in pig production in Austria between 2015-2020. Other studies have also reported that tetracyclines and penicillins are the most commonly used antimicrobials in pig production, such as a systematic review of 36 international papers [27] and a survey of 36 finishing pig farms in Italy [24]. In 2016, an Irish study of 67 farms, as well as Danish national reporting data, both demonstrated that tetracyclines were most frequently used [22,28], and similar findings have also been reported more recently from Japan [29]. The vast majority of tetracycline use in all these studies, as well as in the Austrian data presented here, was for oral administration. Whilst we do not have access to diagnoses data in Austria, tetracyclines are known to be commonly used for the treatment of gastrointestinal disorders and respiratory disease in pigs of all ages. Although tetracyclines are categorised by the EMA as the lowest level of caution (Category D, prudent use), some countries, such as Denmark, have seen increasing levels of antimicrobial resistance to this antibiotic and are now taking measures to reduce its routine use in pigs [28,30]. Similar resistance patterns have also been reported in studies in Austria, where tetracycline resistance was reported among 66% of Streptococcus suis isolates (increasing up to 88% of Sc. suis isolates obtained from joints) and 67.7% of Escherichia coli isolates obtained from piglets with diarrhoea [31,32]. Among piglet-rearing (and, to a much lesser extent, breeding) farms, a large proportion of antimicrobial dispensing was made up of polymyxins. This antimicrobial class contains the drug, colistin, which is commonly used to treat gastrointestinal disorders in young piglets (both pre-and post-weaning age), particularly disease caused by enterotoxigenic Escherichia coli (ETEC). 
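The oxytetracycline/colistin example above can be written out as a short calculation. The sketch below simply reproduces the arithmetic stated in the text (licensed doses of 40 and 5 mg/kg/day for a 50 kg pig) to show why a mass-based metric inflates the apparent share of high-dose substances relative to a dose-based one.

```python
# Why mass-based metrics overweight high-dose drugs: the worked example from the text.

PIG_WEIGHT_KG = 50
DOSES_MG_PER_KG_DAY = {"oxytetracycline": 40, "colistin": 5}  # licensed daily doses

for drug, dose in DOSES_MG_PER_KG_DAY.items():
    mg_per_day = dose * PIG_WEIGHT_KG
    print(f"{drug}: {mg_per_day} mg for one treatment day of a {PIG_WEIGHT_KG} kg pig")

ratio = DOSES_MG_PER_KG_DAY["oxytetracycline"] / DOSES_MG_PER_KG_DAY["colistin"]
print(f"Same number of daily doses, but {ratio:.0f}x more mass of oxytetracycline")
# -> 2000 mg vs 250 mg per day: an eight-fold difference in mass for one daily dose,
#    which a DDDvet-based metric counts identically.
```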
While it is important to note that piglet-rearing farms make up only a very small proportion of Austrian pig producers (namely a mean number of pigs equivalent to 7809 LSU and between 0.8-1.5% of total antimicrobials dispensed for use in pigs by mg/PCU), polymyxins still made up a relatively large proportion (up to 9% by DDDvet, the third most frequently dispensed class in 2020) of antimicrobials dispensed in Austria overall. Polymyxins are classified by the European Medicines Agency as Category B antimicrobials, the use of which should be restricted as much as possible. Some countries, such as the UK and Denmark, have recently managed to avoid their use altogether among pig producers [23,28]. Although the most recent European Sales of Veterinary Antimicrobial Agents (ESVAC) report in 2021 stated that polymyxin use had fallen by 77% in 31 European countries since 2011, they are still sold at a higher level (based on mg/PCU metrics) in Germany, Poland, Hungary, Portugal, and Cyprus than in Austria [6]. The Netherlands has also reported a 7.3% increase in the use of colistin in all livestock production in 2020 and, as seen in the Austrian data, the vast majority of this use (91% of pig use) was for weaners [33]. Since plasmid-mediated colistin resistance was first detected in China in 2013, and the subsequent discovery of this resistance gene among pigs and humans throughout the world, recommendations have been made to reduce the use of this antimicrobial in livestock production wherever possible [34][35][36]. As would be expected, and as reported in many other studies [27,37,38], given the primarily intensive nature of pig production, the vast majority of antimicrobials were dispensed for oral administration. Systemically administered antimicrobials are generally used for the treatment of individual animals rather than entire groups and were dispensed at a very low level. Category D antimicrobials made up the largest proportion of antimicrobials dispensed for use by injection, namely 2.6 to 2.9 mg/PCU (compared to 53-66 mg/PCU for oral use). While dispensed at a much lower level than Category D antimicrobials for oral administration, Category B antimicrobials (including colistin) were more commonly dispensed for oral rather than systemic treatment, which is particularly concerning as a previous Austrian study of 75 pig farms demonstrated that oral treatments are frequently (in 75% of cases) underdosed and only 8% of cases were correctly dosed [21]. Furthermore, a number of studies have reported that the risk of antimicrobial resistance is substantially higher following oral antimicrobial treatment rather than parenteral administration of such drugs, and the European Medicines Agency also classes oral treatment, particular as a group treatment, to be the least preferable route of antimicrobial administration [19,39]. The data presented here have demonstrated that antimicrobials dispensed for use on pig units with a high number of young piglets make up the highest proportion of Category B antimicrobials, drugs which should be limited to restricted use. Here, it is particularly important for herd veterinarians to work together with pig producers to attempt to prevent disease, such as post-weaning diarrhoea, by improving hygiene and biosecurity, reducing stress, and vaccinating either breeding sows or young piglets whenever possible [40]. 
Given that colistin is critically important for human health (as the first-line drug for carbapenemase-producing Enterobacteriaceae infections), is primarily administered orally to pigs, and colistin-resistant bacteria have been isolated from wastewater from pig slaughterhouses in Germany, the use of this antimicrobial substance is an extremely relevant example of an essential One Health drug affecting human, animal, and environmental health [36,41,42]. For this reason, Austrian pig producers should attempt to learn from pig producers in other countries, where the use of colistin has been considerably reduced or stopped completely. The implementation of the new EU Regulation 2019/6 will bring a number of changes to the use of veterinary antimicrobials in Austrian and European livestock production as a whole. Prophylactic use of antimicrobials will no longer be permitted, and only the metaphylaxis of a group will be allowed when one or more animal is proven to be infected. It is expected that the restrictions on the use of Category B antimicrobials will be tightened and enforced. For this reason, Austrian pig producers and their herd veterinarians will need to alter their antimicrobial use towards a more prudent use of these essential drugs in the future. Conclusions Based on mandatory veterinary reporting, antimicrobial dispensing in the pig sector in Austria has not decreased over the past six-year period. While the vast majority of antimicrobials dispensed are in the EU's least restrictive Category D, an alarming proportion of Category B antimicrobials (primarily polymyxins, namely colistin) are dispensed for use in young piglets. National-benchmarking schemes are already in place for herd veterinarians and are currently being rolled out to individual pig producers. In future, partly due to new EU legislation, changes will need to be made to improve pig health and prudent antimicrobial use in this sector. Materials and Methods In Austria, pharmaceutical companies, marketing authorisation holders (distributors), and pharmaceutical wholesalers are required by law to provide the authorities with details of the sales of veterinary drugs containing antimicrobials. Additionally, veterinarians with in-house pharmacies must also report the quantities of antibiotics that are dispensed for use in food-producing animals for each farm and livestock species. The legal basis for the collection of these data is the "Veterinary Antibiotics Volumetric Flows Regulation" (Veterinär-Antibiotika Mengenströme Verordnung), which was enacted in 2014 [7]. Pig Population Data The number of animals reared on each farm, as well as animal movement and official veterinary authority data, and numbers of animals slaughtered were available from the official veterinary database, namely the "Veterinary Information System (VIS)". Each farm was categorised into one farm type using the reported "production system type", and the number of pigs in each category (piglets, fattening pigs, breeding sows/boars) are from the VIS database. The categorisation was taken from official records and can broadly be defined as follows. Breeding units refers to farms where sows (and sometimes boars) are kept to produce piglets for sale (it is not known at the veterinary authority level whether these piglets then go on to fattening farms or piglet rearing units). Fattening farms rear grower/finisher pigs from 20-32 kg liveweight up to slaughter. 
Piglet-rearing units keep piglets from weaning (i.e., the sows are not present on this type of farm) until the beginning of the fattening period (approx. 20-32 kg). As the name would suggest, farrow-to-finish farms rear piglets from birth to slaughter. Antimicrobial Use Data Veterinarians with in-house practice pharmacies are required to report the amount of dispensed antimicrobials for each marketing authorisation identification number (i.e., each licensed pharmaceutical product) for each farm and livestock species. This is used to calculate the total metric tonnes of each antimicrobial active ingredient dispensed each year. This metric was then converted into mg/PCU for pigs using the standardised method applied by the Austrian authorities for the entire national pig herd and described for national reporting in the European Union's ESVAC report [6]. The standardised weight of a slaughtered pig as part of the PCU calculation is 65 kg; further details on the calculation of the PCU are provided elsewhere [6]. Furthermore, the number of Defined Daily Doses (DDDvet) for each antimicrobial substance, as defined by the European Medicines Agency for the treatment of pigs, was calculated as follows. The total number of milligrams of active ingredient dispensed for each antimicrobial substance was divided by the DDDvet for that antimicrobial substance with respect to pigs and the route of administration [13] to obtain the potential total number of Defined Daily Doses (DDDvet) per 1 kg animal biomass. To calculate the number of DDDvet per year, the following formula was used: DDDvet per year = (total annual number of DDDvet per 1 kg biomass) / (herd size of breeding animals (if present) + no. of animals moved/slaughtered (in kg) for that year). Livestock numbers were estimated based on the number of reported animals on the farm combined with animal movement and slaughter data. To ensure uniformity, livestock numbers were converted into the Austrian Ministry of Agriculture's livestock units (LSU), e.g., piglets and weaners (up to 20 kg liveweight) are classified as 0.07 LSU, growers and young boars/sows (up to 50 kg liveweight) as 0.15 LSU, and breeding boars/sows as 0.30 LSU [15]. The data were also divided by route of drug administration, such as systemic or oral application, as well as by production group. Classification into Prudent Use Categories In addition, the data were divided into groups based on the European Medicines Agency's classifications of B (restrict use), C (use with caution), and D (use prudently), as well as according to the World Health Organization category of "highest priority critically important antimicrobials" (HPCIAs) [18,43]. For details, see Table 4. (NB. The EMA classification A (avoid) was not included as it does not list any antimicrobial substances licensed for use in food-producing animals). Statistical Analyses All statistical analyses were carried out using the statistical programming language R [44]. The data were prepared and plots were created using the tidyverse package [45].
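A minimal sketch of the calculations described in this section is given below, assuming the 65 kg standardised slaughter weight for the PCU denominator and the DDDvet-per-year formula stated above. The dose value and all farm-level figures in the example are hypothetical placeholders, not official EMA DDDvet figures or values from the national dataset.

```python
# Sketch of the mg/PCU and DDDvet-per-year calculations described in the text.
# All numbers below are hypothetical examples, not values from the national dataset.

DDDVET_MG_PER_KG = {"oxytetracycline_oral": 40.0}  # placeholder, not the official EMA figure
STANDARD_SLAUGHTER_WEIGHT_KG = 65                  # standardised PCU weight of a slaughtered pig

def mg_per_pcu(total_mg, n_pigs_slaughtered):
    """Mass-based metric: milligrams of active ingredient per population correction unit."""
    pcu_kg = n_pigs_slaughtered * STANDARD_SLAUGHTER_WEIGHT_KG
    return total_mg / pcu_kg

def dddvet_per_year(total_mg, substance, breeding_biomass_kg, moved_slaughtered_kg):
    """Dose-based metric following the formula in the text."""
    ddd_per_kg = total_mg / DDDVET_MG_PER_KG[substance]          # total DDDvet per 1 kg biomass
    return ddd_per_kg / (breeding_biomass_kg + moved_slaughtered_kg)

total_mg = 5_000_000  # 5 kg of active ingredient dispensed in one year (hypothetical)
print(mg_per_pcu(total_mg, n_pigs_slaughtered=1500))
print(dddvet_per_year(total_mg, "oxytetracycline_oral",
                      breeding_biomass_kg=0,
                      moved_slaughtered_kg=1500 * STANDARD_SLAUGHTER_WEIGHT_KG))
```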
Computed tomography findings of COVID-19 in pediatric patients Background. In this study, we aimed to evaluate the thorax Computed Tomography (CT) findings of pediatric patients diagnosed with coronavirus disease-19 (COVID-19) and to discuss these findings in light of the results of adult patients from the literature. Methods. The CT scans of pediatric patients (1-18 years old) with a diagnosis of COVID-19 by reverse transcriptase-polymerase chain reaction (RT-PCR) in our hospital between March 2020 and January 2021 were retrospectively reviewed. The scans were interpreted regarding the distribution and localization features, and involvement patterns including ground-glass opacity, consolidation, halo/reversed halo sign, interlobular septal thickening, air bronchograms and bronchiectasis. The frequencies of these findings in pediatric cases in our study were recorded. Results. A total of 95 patients with a mean age of 13±4.6 years were included in this study. Among them, 34 (36%) had lesions associated with COVID-19 on CT scans. Bilateral involvement was detected in 15 (44%) while unilateral in 19 (56%) patients. Eighteen (53%) patients had single lobe involvement. In 16 (47%) patients a solitary lesion was detected and in 18 (53%) multiple lesions were present. Ground-glass opacity appearance was observed in 28 (82%), consolidation in 9 (26%), and ground-glass opacity with consolidation in 8 (24%), halo sign in 9 (26%), reversed halo sign in 2 (6%), interlobular septal thickening (interstitial thickening) in 1 (3%) patients. Conclusions. As symptoms are relatively milder in children with COVID-19, CT findings are less extensive than in adults. It is essential to know the thorax CT findings that aid in the diagnosis and follow-up of the disease. The number of pediatric COVID-19 patients is growing significantly. Although most children with COVID-19 have had mild clinical symptoms, cases with severe signs of disease or even death have been reported. 1,2 Early diagnosis of the infection in children is important, as they also increase the transmission risk. Thus, the Computed Tomography (CT) scans of children with a suspicion of pneumonia must be carefully evaluated to both protect children and prevent spreading. 1 The main diagnostic and screening tool for COVID-19 pneumonia is reverse transcriptase polymerase chain reaction (RT-PCR). However, the accuracy of the test depends on the quality of the throat swab and the viral load. Radiological features are very important for the diagnosis of COVID-19, and thorax CT can also detect pulmonary findings suggestive of COVID-19 infection even in patients who have initial negative RT-PCR results. 3 While knowledge about the clinical and epidemiological features of COVID-19 in children is rapidly increasing, large studies of radiological findings are still lacking. In this study, we aimed to present the radiological features of COVID-19 in children. Material and Methods Thorax CT scans of pediatric patients who were admitted to our hospital between March 2020-January 2021 and diagnosed with COVID-19 by RT-PCR were retrospectively reviewed using the hospital database. CT examinations were performed with a 16-slice CT device (Toshiba Alexion 16) in a 3mm slice thickness in the supine position and the appropriate thoracic protocol (kVp: 100-120, mAs: 50-100). An intravenous or oral contrast agent was not administered to the patients. 
The images were sent to a workstation (Syngovia Siemens Medical System, Siemens/Germany) and evaluated in both the mediastinal and parenchymal windows in all three planes (axial, sagittal, and coronal). All the images were evaluated independently by two radiologists with 19 and 9 years of experience in thoracic imaging who were blinded to each other's interpretations. The final decisions were reached by consensus in cases of conflict. The demographic characteristics of the patients, including age and gender, and the CT features of the lesions were examined. The location of the lesions was classified as unilateral or bilateral, anterior or posterior, and central (parenchymal areas adjacent to the hilus) or peripheral (parenchymal areas close to the pleura) according to parenchymal involvement. The distribution of the lesions was classified by lobe (right upper, middle and lower; left upper and lower), and the number of lobes affected was also examined. The morphological characteristics of the ground-glass opacities were classified as patchy or nodular forms. The morphological structure of the consolidation areas was described as round, linear or irregular. Interlobular septal thickening ("crazy paving"), tree-in-bud appearance, air bronchograms, bronchial wall thickening, bronchiectasis, air bubble, reversed halo sign, halo sign, nodule, linear atelectasis, pleural thickening, pleural effusion, pericardial effusion, and mediastinal lymphadenopathy (lymph node short-axis dimension >10 mm) were identified as present or absent. [4][5][6] Statistical analysis was performed using the SPSS program (IBM SPSS Statistics for Windows Version 21.0; Armonk, NY: IBM Corp, USA). The demographic variables were expressed as mean±standard deviation. Other variables were presented as number (N) and percentage (%). A Cohen's kappa coefficient (κ) was calculated to assess the interobserver agreement between the two radiologists who interpreted the CT scans. This study was approved by Kırşehir Ahi Evran University Ethics Committee on 09/02/2021 with the number of 2021-03 / 28.

Results

The lesions were located only in the peripheral regions of the lungs in 29 (85%) and both in the peripheral and in the central regions in 5 (15%) patients. There was no patient with only a central lesion in this study. A single lesion was detected in 16 (47%) and multiple lesions in 18 (53%) patients. The frequencies of the remaining CT findings were as follows: bronchial wall thickening, 3 (9%); air bubble sign, 1 (3%); vascular enlargement, 2 (6%); interlobular septal thickening, 1 (3%); halo sign, 9 (26%); reversed halo sign, 2 (6%); tree-in-bud sign, 3 (9%); nodule, 9 (26%); linear atelectasis, 2 (6%); pleural effusion, 0 (0%); pericardial effusion, 1 (3%); lymphadenopathy, 0 (0%). Among the 95 patients with CT scans, 42 had a prior chest X-ray. The X-rays were interpreted as normal in 38 of these 42 patients. Focal radiopacities compatible with consolidation were observed in 4 patients (Fig. 18). As a result, focal consolidation was the only pathological finding detected on the X-rays of the patients; no other pathological signs were detected on chest X-rays. Pathological findings were also seen on CT scans in all these 4 patients, while there was no patient with signs on X-ray who did not have a pathological finding on CT.

Discussion

COVID-19 is a highly contagious acute infectious disease caused by SARS-CoV-2 that can be transmitted by an infected patient or an asymptomatic carrier and can cause pneumonia. Since most pediatric patients are asymptomatic, they have a critical role in the spread of the disease.
In pediatric cases with mild clinical symptoms, usually, a plain chest X-ray does not provide sufficient information, which leads to misdiagnosis. In some children with negative COVID-19 RT-PCR tests and clinical findings, especially in the initial stages of the disease, a thorax CT examination may be very useful for the diagnosis. 7 When compared with adult COVID-19 patients, unilateral involvement is higher in pediatric patients. 7,8 Involvement rates of the lower lobes are high in pediatric cases. 8,9 In our study, consistent with the literature, lower lobe dominance was present while left lower lobe involvement was the most common. Chen et al. 10 stated that pediatric patients tended to have less extensive involvement than adult patients in their study. Sharon et al. 11 suggested that lesions in pediatric patients predominantly involved 1 or 2 lobes. In a study by Palabiyik et al. 4 single lobe involvement was dominant in pediatric patients as well as unilateral lung involvement. Similar to our data, the number of patients with multiple lesions was found to be higher than with a single lesion. 12 However, our relatively high rate of single lesions may be due to less severity of the involvement or to the early phase of the disease at the time of the CT scan. While parenchymal lung lesions are often located peripherally, both peripheral and central lesions may be seen in the same patient. However, only central involvement is very rare 13,14 as in our study. In a meta-analysis by Katal et al. 1 isolated ground-glass opacity, consolidation, and the concomitance of ground-glass opacity with consolidation were suggested as the most common findings in children with COVID-19. The most common radiological finding detected in COVID-19 pneumonia is groundglass opacity. In our study, consistent with the literature, ground-glass opacities were smaller in size with a lower density and they were less diffuse in pediatric patients when compared to adults. Also, interlobular septal thickening less frequently accompanies ground-glass opacities in children. 15,16 Consolidation is common in COVID-19 pneumonia as it can be seen as an isolated finding but concurrence of consolidation and ground-glass opacity may be seen frequently. They become prominent in the peak period of the disease, especially in the posterior and peripheral aspects of the lower lobes. It can be morphologically round, linear, and irregular in shape, and may be accompanied by air bronchograms. 6,7,17,18 In our study, round-shaped consolidation areas were predominant while linear and irregular consolidations frequently accompanied round consolidations. Halo sign is termed as the presence of a surrounding ground-glass opacity around a nodule or mass. 19 The halo sign is more common in pediatric COVID-19 cases compared to adult patients. 20 In a study by Xia et al. 7 they stated that the halo sign surrounding the consolidations was observed at a rate of 50% and this finding could be considered specific for pediatric patients. In our study, the frequency of halo sign associated with consolidation areas and nodules was found as 9%. Interlobular septal thickening refers to the collection of inflammatory cells in the interstitium. In COVID-19 pneumonia, it can be isolated or accompanied by groundglass opacities and consolidation areas ("crazy paving"). This is one of the common findings of COVID-19 pneumonia in adults and it was less frequent in children in our study when compared with studies on adults in the literature. 
[21][22][23] Reversed halo sign is one of the atypical findings of COVID-19 pneumonia and is rare in children. 11 Bronchial wall thickening due to airway changes is a finding reflecting the severity of the disease. 24 In a study with 10 pediatric COVID-19 cases, Tan et al. 25 found bronchial wall thickening in 1 patient. Bronchiectasis may occur due to volume loss during the organization of consolidation areas. 17 The air bubble sign can refer to the pathological expansion of physiological air space in the parenchyma of the lung or the rounded appearance of existing bronchiectasis or the air gaps formed during the resorption of the collapsed structures. 26 Vascular enlargement is described as an increase in the size of subsegmental pulmonary vessels (> 3 mm), especially in lung areas where parenchymal involvement is more prominent. In patients with COVID-19 pneumonia, these findings may be related to the damage and thickening of the vascular wall structures caused by inflammatory processes. 27 We could not find detailed data on bronchiectasis, air bubbles, and vascular enlargement in pediatric COVID-19 patients. In our study, we detected bronchiectases in 2 (6%), air bubble finding in one (3%), and vascular enlargement in 2 (6%) patients. Nodules are round or irregular opacities less than 3 cm in diameter, well or poorly circumscribed, and are often associated with viral pneumonia. Its' incidence in children with COVID-19, is slightly higher than in adult patients. 7,19,28 In our study, 9 (26%) patients had mostly well-circumscribed subpleural and perivascular nodules with a mean diameter of 4.6 mm ± 1.3 (3-7 mm). The tree-in-bud finding, which is usually an indicator of small airway disease, is one of the atypical findings of COVID-19. These lesions should raise a suspicion of the presence of bacterial or viral co-infection and patients should be evaluated in this respect. 26 In the study of Xia et al. 7 they concluded that co-infection is seen frequently in pediatric COVID-19 patients (40%). In our study, 3 (9%) patients had nodular infiltration areas. However, we did not have additional sufficient evidence to support the presence of co-infection in these patients. The incidence of pericardial effusion of adult COVID-19 cases on CT images is approximately 5% 20,29 , but we could not find data about the frequency in pediatric COVID-19 cases. In our study, one patient had pericardial fluid. Pleural fluid and lymphadenopathy in pediatric COVID-19 patients were rarely reported in previous studies. 30 Also, pleural fluid and lymphadenopathy were not found in our study. The clinicians and radiologists should be in consensus on the evaluation of pediatric patients regarding the sensitivity and specificity of CT, the accuracy of RT-PCR tests, and radiation exposure. It should be kept in mind that CT findings in COVID-19 are not specific and may occur in various diseases such as other viral or atypical pneumonia, hypersensitivity pneumonia, eosinophilic lung diseases. Also, a relatively lower positive predictive value is another potential limitation of CT especially in some regions with a low COVID-19 prevalence. 5,31 In the new guidelines by the North American Radiology Association (RSNA), the RT-PCR test is suggested as the first method to be used for the diagnosis of COVID-19 in children. They noted that imaging is not indicated unless the patient has potential risk factors or a progression in clinical symptoms. 32 In the study by Ma et al. 
23 they stated that a significant improvement of the lesions was observed in most of the pediatric cases on follow-up images. Therefore, control CT scans should be obtained only when they are necessary, considering the clinical changes in the patients. A guideline by the North American Radiology Association (RSNA) 32 recommends bidirectional (posterior-anterior and lateral) chest radiography for the follow-up of pediatric patients with COVID-19. The primary limitations of the study are its retrospective nature and the relatively small patient number. Also, we could not exclude the presence of a bacterial or viral co-infection in some patients with suspicious CT findings such as nodular infiltrations. In conclusion, the number of pediatric COVID-19 cases is gradually increasing. There are some differences in the thoracic CT features of COVID-19 in children compared to adults. Awareness of the CT findings of COVID-19 in children is important for both rapid isolation and control of the disease. The use of CT for the follow-up of pediatric COVID-19 patients must be limited because of the high radiation dose.

Ethical approval

This study was approved by Kırşehir Ahi Evran University Ethics Committee on 09/02/2021 with the number of 2021-03 / 28.
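As a brief aside on the statistics described in the Methods, the sketch below shows one way to compute the Cohen's kappa coefficient used to quantify interobserver agreement between the two readers. The ratings are hypothetical and the implementation is a generic illustration, not the SPSS procedure used in the study.

```python
# Generic Cohen's kappa for two readers rating a finding as present (1) or absent (0).
# The ratings below are hypothetical and only illustrate the calculation.

from collections import Counter

def cohens_kappa(rater_a, rater_b):
    """Cohen's kappa for two equal-length sequences of categorical ratings."""
    assert len(rater_a) == len(rater_b) and len(rater_a) > 0
    n = len(rater_a)
    categories = set(rater_a) | set(rater_b)
    observed = sum(a == b for a, b in zip(rater_a, rater_b)) / n
    freq_a, freq_b = Counter(rater_a), Counter(rater_b)
    expected = sum((freq_a[c] / n) * (freq_b[c] / n) for c in categories)
    return (observed - expected) / (1 - expected)

# Hypothetical example: presence/absence of ground-glass opacity on 10 scans.
reader_1 = [1, 1, 0, 1, 0, 0, 1, 1, 0, 1]
reader_2 = [1, 1, 0, 0, 0, 0, 1, 1, 1, 1]
print(f"kappa = {cohens_kappa(reader_1, reader_2):.2f}")
```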
VALIDATION OF DIVERSE INFORMATION TO PREPARE AN INTERACTIVE BOOKLET FOR FAMILIES OF AUTISTIC CHILDREN

Objective: to validate content and face of diverse information to prepare an interactive booklet for families of children with Autism Spectrum Disorder. Method: a methodological research study conducted in Curitiba-PR, Brazil, with 86 experts between June and November 2020 and using the Delphi technique. An instrument with variables about relevance of the content, clarity and objectivity, and scientific topicality was used for data collection. In data analysis, the Content Validity Index was set at ≥75%. Results: the content about the characteristics of autistic children, their diagnosis, behavior and rights was considered valid by the experts in the first round. The diverse information about the signs in autistic children and their future was validated in the second round, after the reformulations suggested by the experts. Conclusion: this educational resource has the potential to contribute to health education for the families of autistic children.

INTRODUCTION

Changes in children's health status can generate stress, concern and worries in the families. Currently, there is an increasing number of children with Autism Spectrum Disorder (ASD): the estimates show that one out of 59 children presents the disorder 1 . It is a condition that involves changes in communication, social interaction, behavior and sensory system 2 . In addition to the aspects that affect children, there are several care demands to be addressed by the family members, as they need to learn how to take care of a child with ASD, to understand all their peculiarities, and to search for specialized care 3 . Faced with ASD diagnosis, families start to request information 3 and to search for it with professionals of different areas and also from other sources, such as those freely and openly available 4 . However, the families themselves report that much of the free information is not reliable 5-6 . Access to low-quality information can lead some families to make wrong decisions 7 , raising concerns in the professionals who care for children with different profiles of chronic diseases and their families; therefore, these professionals have developed contents using different free-access dissemination media such as YouTube, podcasts and/or vodcasts, among other online resources, providing families with useful and reliable information [8][9] . These strategies can also be used for families of children with ASD; however, the information needs to be systematized by professionals with experience with this population. Given this scenario, it is indispensable that the health team promotes ways to provide informational support, turning professionals into facilitators of the process to engage the family in the care of children with ASD. Digital interventions in the health area are flourishing every day. Patients, family members and professionals alike are surrounded by digital tools; therefore, it seems natural that society starts using them more frequently.
The benefits from using these resources include low cost and good accessibility for the families. However, a number of studies indicate a gap in the reliability and quality of the diverse information made available 4-7 . Given the limited materials aimed at families of children with ASD 4-7 , it becomes necessary to elaborate resources for such families. Thus, the elaboration of an interactive booklet for the target population was designed. In addition to that, it is necessary to identify what information the professionals specialized in ASD consider important to include in the interactive booklet under development, as a way to provide qualified and safe information by means of health education. Thus, this study aimed at validating content and face of diverse information to prepare an interactive booklet for families of children with ASD.

METHOD

This is a methodological research study for content and face validation, conducted in Curitiba-PR, Brazil. In order to select the contents for the interactive booklet, interviews with the families and an integrative review were conducted to identify their information needs. The interviews took place between September 2018 and September 2019 and were entitled "Demand for information by the families of children with Autism Spectrum Disorder" 6 . The integrative review was conducted between September 2019 and February 2020 and was called "Information requirements by the families of children with Autism Spectrum Disorder: An integrative review" 4 . These data allowed an understanding of the experience of families of children with ASD, extracting important contents to inform them and, thus, planning the preparation of the information for developing the booklet. Subsequently, the following informative contents were selected in the scientific literature: characteristics; diagnosis; behavior; signs; right to health; and future of children with ASD, and they were presented in the thesis entitled "Informational support for the families of autistic children: Content validation" 11 . Such being the case, this study will present the content and face validation procedure from the perspective of the experts in ASD. The participants were experts selected through their Lattes Curriculum (LC), being considered people whose knowledge allowed them to master a specific knowledge area 10 . Searches for the LC were conducted by the subject matter of "autism", Brazilian nationality, PhD academic training, and the following professional activities with the respective number (n) of invitations by professional category: social worker (14), law professional (7), physical educator (20), nurse (13), physiotherapist (21), speech-language pathologist (47), physician (23), music therapist (17), nutritionist (19), pedagogist (26), psychologist (54), and occupational therapist (39). The invitations were sent to the previously described experts who presented qualifications in the area of research, teaching and/or care, with a minimum sum of 10 points in the activities developed in the last five years (from 2015 to 2020), according to the order in which they were mentioned in the LC.
The criteria were as follows: at least five years of assistance-related experience in relation to the ASD area: 5 points; and at least ten years of assistance-related experience in the area of interest: 10 points. Invitations were sent until achieving the minimum sample of six respondents for each content, according to the guidelines set forth in the Guide by the American Association of Orthopaedic Surgeons/Institute of Work and Health 12 . Data collection took place between June and November 2020. The research invitations were emailed along with a link to access the questionnaire. If no response was obtained, a new invitation was sent one week after the first one, and a third invitation was sent one more week later if lack of response persisted. A total of 247 invitations were sent in the first research round, receiving answers from 63 respondents (25%). Fifty-three invitations were made in the second round, with answers from 23 respondents (43%), totaling 86 research participants. The data collection instrument included the following experts' sociodemographic data: age (years old), gender, geographic region of residence, training, function/position in the area of professional performance, and time of professional performance (years). In addition to these data, the instrument was based on criteria for the validation of materials 13 and adapted from another study 14 which validated content and face of a booklet. Thus, the following aspects were adopted: content validation (scientific relevance and topicality) and face validation (clarity and objectivity). Eight questionnaires were created in the Typeform software (digital tool), according to the thematic categories of the diverse information prepared, in order to make them less extensive/exhaustive and facilitate the experts' adherence to the research. The questionnaires were the following: 1-Characteristics of children with ASD; 2-Diagnosis of children with ASD; 3-Behavior of children with ASD; 4-Signs in children with ASD; 5-Right to health; 6-Right to education and work, protection against discrimination; 7-Social right; and 8-Future of children with ASD. To validate content and face, it was decided to use two levels of answer choices ("I agree" or "I disagree"), as the pre-test verified that, when increasing the number of answer choices by including "I partially disagree" and "I partially agree", no suggestions for content improvements by the experts were obtained. The "I agree" or "I disagree" options were selected so that, when the experts selected "I disagree", a questionnaire field was opened automatically to describe the reason, thus enabling the researchers to reformulate the contents. The Delphi technique was used, whose objective is to assess agreement among the experts in relation to the content evaluated. For data analysis, the experts' sociodemographic information was organized in a spreadsheet, and absolute, percentage and mean frequencies were calculated. The Content Validity Index (CVI) 15 was used to determine agreement among the experts for content validation. This method allows assessing the percentage of experts that agreed or not with the content. The CVI was calculated by adding up the number of "I agree" answers and dividing it by the total number of items included in the questionnaire.
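A minimal sketch of the agreement rule described above is given below: it computes a CVI-style proportion of "I agree" answers for one item and applies the ≥75% cut-off adopted in this study. The expert answers are hypothetical, and reading the CVI as a per-item proportion of agreeing experts is an assumption made for illustration only.

```python
# Illustrative CVI calculation with the >=75% cut-off used in the study.
# Expert answers are hypothetical; "agree"/"disagree" mirrors the two-level answer scale.

def cvi(answers):
    """Proportion of 'agree' answers among all expert answers for one item."""
    return sum(1 for a in answers if a == "agree") / len(answers)

def classify(answers, threshold=0.75):
    """Items with CVI >= 75% are taken as validated; otherwise they go to a new round."""
    value = cvi(answers)
    return value, "validated" if value >= threshold else "reformulate and resubmit"

# Hypothetical item judged by eight experts.
item_answers = ["agree"] * 6 + ["disagree"] * 2
value, decision = classify(item_answers)
print(f"CVI = {value:.0%} -> {decision}")
```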
All the content with CVI values equal to or greater than 75% were considered as validated, and those with CVI values <75% were reformulated and forwarded to a second validation round. Resolution No. 466/2012 of the National Health Council was followed, which approves the guidelines and regulating norms for research involving human beings. The research was submitted to a Research Ethics Committee and was approved under Opinion No. 3,312,897. RESULTS The study participants were 86 experts aged from 27 to 62 years old, with a mean of 42. Seventy-eight (91%) were female and the mean, minimum and maximum time of professional performance corresponded to 18, 1 and 42 years, respectively. Based on the following topics: characteristics, diagnosis, behavior, signs, right to health, and future of children with ASD, 52 items were created and validated, referring to the diverse information directed to the families of children with ASD that should be included in the booklet. Table 2 presents the content and face validation data about the characteristics, diagnosis and behavior of children with ASD. All these contents were validated in the first round, as can be seen in the CVI. Table 3 presents the content and face validation data regarding the signs in children with ASD. Two rounds were necessary to reach the minimum CVI of 75%, mainly in the Clarity and Objectivity aspect. Item 4.6 was validated in the first round and the experts suggested adding the contents of items 4.2 to 4.4 and of item 4.14. Table 4 presents the data referring to content validation about the rights (validated in the first round) and future of children with ASD (validated in the second round). The content about the children's future, item 6.1, was not validated in the first round. The interactive booklet is under development by the same research group, and, subsequently, it is intended to submit it to a navigation and usability test with families of children with ASD. The figure below presents the cover page corresponding to the first topic of the booklet. DISCUSSION This study evaluated content and face validity of the diverse information directed to families of children with ASD together with specialists in the disorder in order to make it available as an interactive booklet. The referred content can be considered a technological innovation because, despite the availability of online informative contents, the families report poor quality, a massive amount of information, and use of technical terms that hamper its understanding 7,16 . One of the requirements that were considered to select the contents was their relevance for the target audience. Information in the scientific literature considered important for the families was selected, such as characteristics of the disorder, ASD diagnosis, clinical signs in children, how to deal with their behavior, and their rights and future 4,6 . Validation of some items about the disorder signs and the future of children with ASD in the relevance, objectivity and clarity aspects was achieved in the second round. Likewise, validation of some items about the signs in children with ASD in the scientific topicality aspect was achieved after a number of reformulations in the second round. Relevant, timely and safe information represent a starting point for knowledge acquisition and for development of skills and competencies. 
This information allowed parents of children with ASD to take rapid and successful actions with regard to the rehabilitation of their children and to exercise the rights of children and their parents 3 . Considering the above, it became essential to involve the professionals that monitor these families every day in the validation process corresponding to the informational content. These professionals have expertise in the field and are capable of preventing the development of materials with inaccurate or tendentious results that may lead to wrong conclusions, resulting in more qualified materials that can be better used by the target population 17 . The experts participating in this study had experience in the teaching, research and care areas. These data are important because they show that the participants were skillful in theoretical judgment of the content and had care experience with the families, evidencing the potential of the material they validated regarding applicability and reliability of the diverse information for the target population. Other important requirements for the validation of materials for families were clarity and objectivity, as controversial information may represent, to parents, a source of inaccuracy and ambiguity in understanding the disorder [18][19] . A number of studies indicate that parents report difficulties understanding the written texts provided by professionals, as they normally include technical terms and lack clear, jargon-free information 7,20 . In addition to that, the parents reported finding a massive amount of online information, although they mentioned delay and difficulty finding information applicable to the situations they are searching for 21 . Another obstacle faced by the parents is the abundance of non-validated information available in online media. This information is often accessed by the family members, and not all of it is made available by people with expertise in the field, which makes it possible to question the reliability and safety of using this information [22][23] . In view of the above, the importance of surveying updated information in the scientific literature was accentuated, obtained from scientific databases and safe sources and submitted to validation with experts in ASD, as collective judgment in the validation process adds reliability to the materials 23 . Considering the complex nature of ASD, information provision must be based on the assessment of different areas of professional performance. Therefore, the multidisciplinary composition of the group of experts who participated in this research allowed for a comprehensive and thorough evaluation, with pertinent and complementary remarks, which made it possible to adjust the contents and qualify the material. With regard to the study limitations, as it involves informative content, periodical reviews should be conducted for the content to remain updated and be continuously used with the families, remaining focused on scientifically proven reports for the care of children with ASD. This study allowed the specialists to analyze the diverse information proposed in several aspects, in order to ease its understanding and make it practical and of high quality for the families. The current study also contributed by turning dense scientific and academic content into objective, clear information that is easily understandable by the families and updated according to the scientific literature.
Information is essential for the families, and it is up to the professionals and managers working in the health area and in other fields to organize care so that these needs are met. The information must be readily available and shared via medical, educational and other institutions. Therefore, the current research contributed as an innovation in care for nurses and other health professionals, as they will be able to use the content and the booklet in their clinical practice with patients and families. The same research group is incorporating the validated content into the format of an interactive booklet, which will subsequently be subjected to a navigation and usability test with the family members. The potential of the validated content for serving its target population is noted.
Benefits of Positioning-Aided Communication Technology in High-Frequency Industrial IoT

The future of industrial applications is shaped by intelligent moving IoT devices, such as flying drones, advanced factory robots, and connected vehicles, which may operate (semi-)autonomously. In these challenging scenarios, dynamic radio connectivity at high frequencies -- augmented with timely positioning-related information -- becomes instrumental to improve communication performance and facilitate efficient computation offloading. Our work reviews the main research challenges and reveals open implementation gaps in Industrial IoT (IIoT) applications that rely on location awareness and multi-connectivity in super high and extremely high frequency bands. It further conducts a rigorous numerical investigation to confirm the potential of precise device localization in the emerging IIoT systems. We focus on positioning-aided benefits made available to multi-connectivity IIoT device operation at 28 GHz, which notably improve data transfer rates, communication latency, and extent of control overhead.

I. INTRODUCTION AND MOTIVATION

It is predicted that the Internet of Things (IoT) market will reach 561 billion US dollars by 2022 1 . A major share of these revenues will come from the industrial IoT (IIoT) segment, which comprises manufacturing, transportation, logistics, and utilities. Overlapping with the paradigm of Industry 4.0, the early definitions of the IIoT portray it as a collection of interconnected devices, sensors, and actuators that are capable of operating in a (semi-)autonomous manner, are connected to the Internet (typically through low-power radio technologies), and are deployed in various industrial environments, such as factory floors, underground mines, excavation sites, oil and gas fields, and automation halls. More recently, the definition of the IIoT has been applied in a broader context, encompassing a wide variety of applications in smart living, safety, surveillance, and intelligent transportation systems, including autonomous vehicles, flying drones, and industrial automated robots [1]. Among the top use cases driving the acceleration of the IIoT, the following will play an important role: predictive maintenance of machines and robots, self-optimizing large-scale production of various goods and assets, autonomous fleets of connected cars and unmanned aerial vehicles, as well as remote patient monitoring.

This work was supported by the Academy of Finland (projects PRISMA and WiFiUS) and by the project TAKE-5: The 5th Evolution Take of Wireless Communication Networks, funded by Tekes. The work of the third author is supported by a personal Jorma Ollila grant from Nokia Foundation and by the Finnish Cultural Foundation.

The performance criteria in the emerging IIoT applications are multi-faceted. They range from conventional traffic-centric demands for large capacity and high throughput to emerging requirements of (ultra-)low latency in mission-critical applications with stringent reliability and adequate scalability. A taxonomy underlying the IIoT was recently discussed in [2], where a distinct separation has been made between the 'massive' and the 'critical' pillars of this sector.
Here, the former refers to a large number of interconnected low-cost devices that require appropriate scalability, while the latter envisions that the device volumes are much smaller and the focus shifts to achieving very low latency, better robustness, and high availability of uninterrupted connectivity. Accounting for the rapid growth pace of critical IIoT applications, an immediate goal of next-generation radio technology becomes the construction of comprehensive wireless connections among diverse stationary and mobile machines over extensive coverage areas. Today, this construction is pioneered by the fifth generation (5G) of cellular networks, and the respective standardization is underway at full speed. Seamless connectivity support for unconstrained mobility is particularly challenging for mission-critical IIoT devices moving at various speeds over a particular geographical area, such as in industrial automation and control, remote manufacturing, intelligent transportation systems, and numerous other applications. With the above background in mind, in this paper we (i) provide a concise review of new IIoT challenges related to the use of super high and extremely high frequency communication, (ii) propose prominent positioning-related technology solutions to improve the performance of mobile IIoT applications, and (iii) by relying on both ray-based modeling and system-level simulations, evaluate their performance in selected scenarios, where devices move at various speeds. The rest of this article is structured as follows. Section II summarizes the key challenges in the context of IIoT, while Section III discusses attractive and novel lines of technology development. Then, in Section IV we detail the proposed modeling methodology, and the respective numerical results follow in Section V. The work concludes in Section VI.

II. CHALLENGES IN EMERGING IIOT SYSTEMS

Advanced IIoT systems can be regarded from two different perspectives, as demonstrated in Fig. 1: (i) their physical proximity to the central control unit or the core network node, where we differentiate between local control and remote control operations, and (ii) their level of autonomy, where one could distinguish fully-autonomous and semi-autonomous systems. While today's technology trend leans towards fully autonomous system design, the intermediate steps in this direction will include remote-controlled semi-autonomous devices, such as fleets of connected vehicles or marine vessels. The envisioned application domains of the IIoT are exceptionally vast and span across multiple industries, such as transportation, healthcare, surveillance, and environmental monitoring. As follows from Fig. 1, the complex and dynamic network of interconnected IIoT devices needs to be maintained and utilized efficiently. To this aim, several key requirements for critical IIoT systems emerge along the lines of (ultra-)low communication latency, seamless and reliable radio connectivity, low deployment and maintenance costs, as well as fast processing capabilities. To enjoy unconstrained mobility, future IIoT devices (cars, robots, drones, etc.) might need to remain battery-powered or battery-less (i.e., harnessing the energy of their surrounding environment or receiving it through dedicated wireless power transfer).
Indeed, energy dissipation of advanced IIoT equipment, such as robots and drones, becomes a growing concern as it threatens to limit the lifetime of the corresponding network infrastructures. Fortunately, recent progress in battery technology as well as in alternative power sources promises to alleviate this constraint and help achieve reliable connectivity performance. Further, the costs of providing maintenance to large numbers of advanced machines may become prohibitive, which is particularly crucial for emerging real-time control and fault diagnostics applications that involve moving industrial robots and drones. As a result, the emerging industrial services increasingly rely on integrated indoor/outdoor deployments of moving connected machines. To facilitate their efficient operation, such systems require novel 3D positioning and location tracking mechanisms that could potentially offer service continuity and reliability on a massive scale. Operating advanced IIoT sys-tems by relying solely on the conventional (2G-4G) broadband connectivity solutions might be insufficient, since such functioning may incur increased channel congestion and produce unpredictable response times. Another crucial direction is decreasing the computational burden of the individual IIoT devices, which may be achieved by offloading their more demanding tasks onto the edge/fog computing infrastructures [3]. While both edge and fog computing formulations -unlike the cloud computing paradigm -assume certain proximity to the target mobile device, there are inherent distinctions between the two concepts that are related to the level of separation between the nodes and their functionality [3]. III. KEY TECHNOLOGIES FOR NEXT-GENERATION IIOT One of the distinguishing features of the emerging 5G mobile systems is their reliance on the integration of the conventional (cmWave) and higher-frequency (mmWave) spectrum for improved throughput, latency, and reliability. While the vast majority of the current wireless systems employ frequencies below 6 GHz, we expect that advanced moving devices, such as cars, drones, and industrial robots, will increasingly rely on the above-6 GHz bands for their operation with better spectral efficiency (i.e., 30 bps/Hz and 15 bps/Hz in the downlink and uplink, respectively) and reduced over-the-air latency (i.e., < 2 ms). Generally, going higher in frequency requires rethinking many aspects of the state-of-the-art transceiver architecture, physical layer techniques, and system control solutions to maintain seamless service continuity. A. Above-6 GHz transceiver technology for IIoT At higher carrier frequencies, the need for highly directional transmission and reception grows as well. Highly directional transmission and reception are thus becoming key to delivering substantial beamforming gains to ensure communication link reliability as well as control interference to/from other IIoT devices in dense deployments. A natural approach to implementing these important mechanisms is to utilize massive antenna arrays with beamforming capabilities at both the transmitter (TX) and the receiver (RX). As all-digital solutions are costly and power inefficient when the number of antennas becomes large, alternative low-power approaches are based on all-analog and hybrid architectures [4]. 
Analog beamforming offers a simple solution to steer and align the TX and RX beams in mmWave communication, but it is typically limited to single-stream and single-user scenarios wherein only one baseband per radio frequency chain is available at each end. A viable alternative for complex IIoT systems is to employ hybrid analog/digital precoding structures that allow multiple streams to be transmitted/received in parallel. In this configuration, appropriate weights for multiple analog arrays may be designed jointly with digital precoding and combining. However, this creates challenges related to beam alignment and beam tracking, since time/frequency resources are taken away from the actual data communication. More importantly, as far as wearables, moving robots, or drones are concerned, mobility is yet another crucial factor to consider [5]. Positioning information can be employed to reduce the beam alignment overheads as well as develop proactive beamforming techniques. Although the 5G standardization process is not yet complete, it has already been envisioned that most devices will need to be localized in future 5G radio systems with at least 1m of accuracy in at least 80% of the cases [6]. In [4], [7], novel results on mmWave-based positioning were demonstrated. It has been confirmed that by leveraging the availability of multiple antennas at the TX and RX as well as utilizing large signal bandwidth for periodic pilot transmission in the mmWave bands, centimeter-level positioning accuracy can be achieved. In cmWave bands, another important study is illustrated in [8], where periodic transmission of uplink pilot signals is exploited for high-accuracy positioning in a network-centric manner. Therefore, in the next generation of IIoT implementations a feasible solution is to utilize the available cmWave networks for providing the initial positioning information to improve the efficiency of mmWave communication. Reliance on existing cmWave technologies is especially attractive for designing the TX and RX beamformers for future mmWave access, thus reducing communication latency in prospective IIoT deployments. B. Location-aware multi-connectivity for improved operation Cooperative and coordinated multi-point transmissionreception schemes and spatial domain interference management have been studied extensively in the past, and their performance limits are well understood [9]. Contemporary wireless standards already support efficient single-cell Multiple-Input Multiple-Output (MIMO) beamforming, which allows for smarter beamformer design to efficiently take advantage of multi-user diversity in the spatial domain. In connection to that, the basic operation of multi-cell MIMO transceiver processing has been covered by the 3GPP's Long Term Evolution (LTE) specifications and also includes support for fully centralized joint-processing coordinated multi-point (CoMP) transmission schemes. In prospective cloud radio access network (RAN) design, the joint beamforming is fully centralized, and the base stations act merely as virtual Transmission-Reception Points (TRPs). They are typically connected to a Virtual Central Unit (VCU) in the cloud over a high capacity and low latency backhaul link. The VCU may then exploit joint processing to utilize multiple TRPs for beamforming simultaneously. Most of past research efforts have not considered how multiconnectivity based resource allocation for moving IIoT devices could be augmented with precise positioning information. 
Importantly, such information needs to be made available in 3D space and in real-time, so that link blockage (by other IIoT machines) and self-blockage (by parts of the actual communicating machine) could be mitigated timely and without causing harmful session interruptions. Connectivity across multiple TRPs is assumed to be vital for robust and reliable communication, especially when real-time computation offloading onto the edge-computing infrastructure is required on the move [10]. An important challenge in IIoT for supporting parallel concurrent links/beams to/from multiple TRPs is the requirement for the network to track the direction (and, generally, the channel state) of each link. Unlike in legacy scenarios, where each mobile terminal is served solely by its closest TRP, dynamic multi-connectivity offers a possibility to provide parallel data streams even in the line-of-sight (LoS) conditions, given that the TRPs are separated geographically (assuming sufficient angular difference). From the MIMO processing perspective [11], this means that multiple beams can be utilized to increase either data-rate capacity or communication reliability. Hybrid analog-digital beamforming architectures imposed by the use of carrier frequencies above 6 GHz render multiuser interference management for multipoint connectivity to be particularly challenging. There is an underlying trade-off between higher data rate and better reliability, which should be understood in more detail for future IIoT systems. Another factor that impacts the choice of the reliability/rate balance point is connected with the beamwidth of each TRP link. Wider beamwidth provides lower communication rates but remains less susceptible to the loss of radio connection. On the other hand, in multi-connectivity systems, wider beams -and hence lower rates per beam -may be compensated by establishing multiple parallel streams between the moving IIoT device and its serving set of TRPs. C. Positioning-aided dynamic computation offloading Historically, positioning and communication system architectures have been evolving disjointly in existing (cmWave) communication and navigation systems. As of today, no lowpower positioning technology is able to seamlessly scale from tens of meters to few centimeters of accuracy, neither indoors nor outdoors. Advanced mmWave IIoT systems will be based on dense deployments of TRPs, narrow antenna beams, and very large bandwidths, and thus potentially have sufficient capacity to achieve very accurate positioning under acceptably low energy consumption. Building on this, industrial environments can highly benefit from accurate positioning information, since timely localization of machines, environment mapping, and mobility prediction can enable robust and proactive edge-cloud offloading of demanding computations. In particular, an industrial machine can push its raw multimedia data to the nearby cloud-computing infrastructure in real-time, instead of processing it locally (due to energy constraints). As a result, its human or robot operator can receive the already processed multimedia stream through the cloud infrastructure and then take the necessary action(s). This eventually results in latency-controlled and power-optimized endto-end communication, location-aware interference mitigation, and improved throughputs delivered to highly mobile IIoT devices. It could be achieved, for example, by using geometric beamforming, also known as location-based or location-aware beamforming. 
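As a rough illustration of the geometric (location-aware) beamforming idea mentioned above, the sketch below steers a TRP beam towards an estimated device position and snaps it to the nearest entry of a fixed azimuth codebook. The coordinates, the 16-direction codebook, and the purely azimuthal selection are illustrative assumptions, not the beam patterns or procedures used in this paper.

```python
# Minimal sketch of location-aided ("geometric") beam selection: point towards the
# estimated device position and pick the closest codebook beam. Values are illustrative.

import math

def pointing_angles(trp_xyz, dev_xyz):
    """Azimuth and elevation (radians) from a TRP towards an estimated device position."""
    dx, dy, dz = (d - t for d, t in zip(dev_xyz, trp_xyz))
    azimuth = math.atan2(dy, dx)
    elevation = math.atan2(dz, math.hypot(dx, dy))
    return azimuth, elevation

def closest_beam(azimuth, codebook_azimuths):
    """Index of the codebook beam whose azimuth is closest to the desired direction."""
    return min(range(len(codebook_azimuths)),
               key=lambda i: abs(math.remainder(codebook_azimuths[i] - azimuth, 2 * math.pi)))

# Assumed codebook: 16 equally spaced azimuth directions at the TRP.
codebook = [2 * math.pi * k / 16 for k in range(16)]

trp = (0.0, 0.0, 10.0)                # TRP position in metres (illustrative)
device_estimate = (35.0, 20.0, 1.5)   # position estimate from the localization layer (illustrative)
az, el = pointing_angles(trp, device_estimate)
beam = closest_beam(az, codebook)
print(f"azimuth {math.degrees(az):.1f} deg, elevation {math.degrees(el):.1f} deg -> beam #{beam}")
```

With such a rule, an updated position estimate can refresh the beam choice at every transmission interval without triggering a full sweep, which is the mechanism the positioning-aided scheme relies on in the evaluation that follows.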
In this context, security and privacy of the localization solutions become important aspects, even though they are outside the scope of this paper. A recent survey on these may be found in [12]. As an advanced application, we envision a moving IIoT device that leverages high-bandwidth connections to offload its processing tasks to the edge-server in real-time. The novelty of this vision stems from two premises: (i) the mmWave positioning in 3D has not been addressed sufficiently so far for real-time applications and (ii) the cmWave moving object tracking and direction finding with low computational resources remain a challenging issue that is still not resolved. Future research efforts will need to concentrate on understanding the extent to which the task of a moving IIoT device (e.g., in remote control, monitoring, or diagnostics applications) can run locally vs. be offloaded onto the proximate edgecomputing infrastructure over the mmWave links despite their intermittent nature (sudden blockages, higher path loss, etc.). A. Representative urban scenario We begin by introducing an illustrative IIoT use case and then describe our modeling environment. A summary of selected numerical results follows in the next section with the aim to understand the benefits of positioning-aided IIoT communication at high frequencies. Our performance evaluation comprises two successive phases: (i) estimation of positioning quality for advanced IIoT devices, which are tracked with cmWave-based techniques that employ a positioning algorithm from [8], and (ii) system-level performance evaluation based on appropriate PHY abstraction and MAC representation for mmWave-based IIoT communication. By adopting this two-phase evaluation methodology, we are able to reduce the target problem of joint communication and positioning to two tractable sub-problems with lower complexity. It allows us to combine the accuracy of raytracing link-level modeling with the flexibility of system-level simulation. B. Positioning error estimation In what follows, we consider three distinct types of moving objects: drones that are characterized by their average speed of 20 kmph and antenna altitude of 5 m; vehicles with 40 kmph mean speed and 1.5 m antenna height; and pedestrians having the speed of 6 kmph on average and carrying the IIoT devices with 1.2 m-high antennas. These target IIoT objects move along random trajectories in an urban environment captured by a typical outdoor Madrid grid as proposed by METIS 2 . In our reference setup, wireless infrastructure is assumed to be dense (by e.g., deploying RAN nodes on top of the street furniture), thus resulting in relatively short inter-site distances (ISDs) and predominantly LoS operation. In the first phase of our conducted evaluation, radio propagation channels are modeled with a comprehensive ray-tracing tool, which mimics all of the relevant multipath components and emulates wireless interference in a realistic manner. In our urban setup described above, the target IIoT device periodically (once in 10 ms) transmits omnidirectional pilot signals to its neighboring TRPs. For that matter, it utilizes OFDM waveforms with 40 pilot sub-carriers and 15 kHz of sub-carrier spacing, which span a relatively small effective bandwidth of 3 MHz. Based on thus acquired channel state information, the TRPs receiving the signal in LoS conditions estimate its direction and time of arrival (DoA and ToA, respectively), as well as communicate this knowledge to the central processing unit. 
The latter relies on such data received from the two closest TRPs in LoS and calculates the resulting location estimate. We note that two different estimation and tracking approaches are employed in the positioning phase for the purposes of their further comparison. First, in the more classical extended Kalman filter (EKF)-based positioning solution (referred to as the DoA-only EKF), only the available DoA measurements from the two closest TRPs are fused into the IIoT device's location estimates, whereas both DoA and ToA measurements are exploited by the second EKF-based positioning method (termed here the DoA&ToA EKF). In the second positioning solution, the tagged industrial device is assumed to have a time-varying clock offset, while the TRPs are assumed to be mutually asynchronous, with the constant clock offsets [8]. C. System-level performance evaluation In the second phase of our numerical assessment, we wrap the modeled trajectory of the target IIoT device into an abstraction of the cellular mmWave system by employing the appropriate MAC-layer and antenna beamforming considerations. In particular, we explicitly model the beam training phase, where the beam sweeping procedure is performed once per a dedicated time interval (e.g., 1 s and 5 s for the sake of comparison). For both the mmWave TRP and the IIoT device, we assume a set of uniform planar arrays -of 8 × 8 and 4 × 4 antenna elements each -that steer the beam in 16 and 8 directions, respectively [13]. The utilized shapes of practical beamforming patterns are displayed in Fig. 2 and 3 (right-hand side) [14]. The directions of the mmWave beams are then locked, and no beam tracking is conducted further on due to the system implementation complexity considerations. This beam position results in a situation where the SNR and thus the overall system performance may degrade between the following beamforming procedures in case if no positioning information is available. If the TRP is capable of informing the IIoT machines about their current location and orientation (derived as a result of the first phase of our evaluation), both the TRP and the moving device may adjust their beam selection results without the need for additional resource-consuming sweeps. We assume a certain level of the transmit power, which complies with the current FCC regulations subject to the directivity patterns of the considered mmWave antenna arrays. In our target setup, the IIoT device in question leverages the benefits of multi-connectivity [10] and is thus able to choose the best mmWave link to one of the neighboring TRPs based on the uplink signal strength as proposed in [13]. Finally, the resulting SNR is translated into the instantaneous data rate via Shannon's formula, which also accounts for the relevant mmWave overheads (including those induced by the beam training) according to our numerology anticipated for 5G mmWave cellular. In summary, Table I collects the core system-level modeling parameters. A. Position estimation results We begin our evaluation with a study on the capabilities of our considered 3D positioning algorithms, which is summarized in Fig. 4. Here, positioning error distributions are displayed for the two localization approaches discussed above and the ISDs of 25 m and 50 m. For all the four cases at hand, a positioning accuracy of around 1 m may be achieved in more than 80% of the situations (even in an asynchronous system). 
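To give a flavour of the DoA/ToA fusion described in the positioning phase above, the sketch below runs a simplified two-dimensional EKF that fuses azimuth and range measurements from two LoS TRPs under a constant-velocity model. It assumes perfect synchronisation (no clock-offset states) and made-up geometry and noise figures, so it is an illustration in the spirit of the tracking filters discussed here rather than the method of [8].

```python
# Simplified 2D EKF fusing DoA (azimuth) and ToA-derived range from two TRPs.
# Geometry, noise levels, and motion model are illustrative assumptions.

import numpy as np

DT = 0.01                                    # 10 ms between pilot transmissions
TRPS = np.array([[0.0, 0.0], [50.0, 0.0]])   # two LoS TRPs, 50 m apart (illustrative)

F = np.array([[1, 0, DT, 0], [0, 1, 0, DT], [0, 0, 1, 0], [0, 0, 0, 1]], float)
Q = np.diag([0.01, 0.01, 0.1, 0.1])          # assumed process noise
R = np.diag([np.deg2rad(2.0) ** 2, 0.5 ** 2])  # assumed DoA / range noise per TRP

def h_and_jacobian(state, trp):
    """Predicted [azimuth, range] seen by one TRP and the measurement Jacobian."""
    dx, dy = state[0] - trp[0], state[1] - trp[1]
    r2 = dx * dx + dy * dy
    r = np.sqrt(r2)
    z_pred = np.array([np.arctan2(dy, dx), r])
    H = np.array([[-dy / r2, dx / r2, 0, 0],
                  [dx / r,   dy / r,  0, 0]])
    return z_pred, H

def ekf_step(x, P, measurements):
    """One predict/update cycle; `measurements` maps TRP index -> [azimuth, range]."""
    x, P = F @ x, F @ P @ F.T + Q
    for idx, z in measurements.items():
        z_pred, H = h_and_jacobian(x, TRPS[idx])
        innovation = z - z_pred
        innovation[0] = (innovation[0] + np.pi) % (2 * np.pi) - np.pi  # wrap the angle
        S = H @ P @ H.T + R
        K = P @ H.T @ np.linalg.inv(S)
        x = x + K @ innovation
        P = (np.eye(4) - K @ H) @ P
    return x, P

# Toy run: a device moving at roughly pedestrian speed, observed with noisy measurements.
rng = np.random.default_rng(0)
truth = np.array([20.0, 15.0, 1.6, 0.0])
x_est, P_est = np.array([18.0, 12.0, 0.0, 0.0]), np.eye(4) * 5.0
for _ in range(200):
    truth = F @ truth
    meas = {}
    for i, trp in enumerate(TRPS):
        z, _ = h_and_jacobian(truth, trp)
        meas[i] = z + rng.normal(0, [np.deg2rad(2.0), 0.5])
    x_est, P_est = ekf_step(x_est, P_est, meas)
print("final position error [m]:", np.linalg.norm(x_est[:2] - truth[:2]))
```

A toy run of this kind typically settles to sub-metre position error, which is consistent in spirit with the roughly 1 m accuracy reported above, although the asynchronous clock handling of the actual DoA&ToA EKF is not modelled here.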
As was highlighted previously, this level of localization accuracy is expected to become the minimum requirement for positioning-related functionality in future 5G radio networks. Interestingly, our results indicate that a positioning accuracy of as high as 0.5 m can be reached in 85% of the situations when applying the more advanced DoA&ToA EKF. Generally, a higher deployment density of the TRPs may increase the number of handovers of a high-speed target device, hence potentially resulting in a degradation of the positioning performance, especially in an asynchronous network. However, the moving speeds of the considered IIoT machines are relatively low in urban environments, which implies longer connection times for a particular TX-RX pair. In these conditions, the positioning filters are able to capture a sufficient number of localization-related measurements and therefore estimate possible asynchronous effects in such measurements, which enhances the overall performance.

B. System-level assessment outcomes

In the second phase, we focus on the following two scenarios: (i) when no additional information is available between two consecutive beamforming procedures (termed the baseline scenario), and (ii) when error-prone positioning information may be utilized at each transmission time interval (termed the proposed scenario). The proposed positioning-aided scenario is also compared to another similar approach in [15]. In addition, we address a reference hypothetical scenario where the exact location of the IIoT device is always known. It serves as an upper bound to benchmark the resulting performance. Here, the locations of the TRPs (with the ISD of 50 m as an example) are fixed and therefore remain precise. We compare the above-introduced scenarios in terms of the average spectral efficiency for the intervals of 1 s and 5 s between the periodic beamforming procedures, as demonstrated in Fig. 2 and 3, respectively. For both of these periods, the upper bound (solid horizontal line) slightly decreases with an increase in the average LoS distance between the TX and the RX, which is determined by the actual elevation of the IIoT device. The performance of the baseline scheme (green bars) depends heavily on the speed of the target machine, since larger distances traveled without a location update naturally lead to more significant mmWave beam misalignment. As one may expect, the performance improves when the beamforming procedures are invoked more frequently. However, this also increases the overheads dramatically due to the long beam training intervals. For the considered periods of 1 s and 5 s, the overheads are estimated to be on the order of 1-2%, while for the 10 s period they escalate. In contrast, our proposed solution (dark violet bars) demonstrates values comparable to those for the theoretical upper bound, while at the same time keeping the beamforming overheads to a minimum. Another positioning-aided algorithm that we consider [15] (mallow bars) indicates a significant improvement with respect to the baseline case. However, it falls behind our proposed solution and, for the case of the 1 s interval, results in a performance similar to that of the baseline pedestrian scenario (Fig. 2).
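The spectral-efficiency bars in Fig. 2 and 3 follow from the SNR-to-rate mapping described in the previous section. A simplified version of that mapping, assuming a fixed per-sweep beam-training cost and the common parabolic (in dB) main-lobe misalignment loss, is sketched below; the antenna and overhead numbers are illustrative stand-ins for the parameters in Table I rather than the exact values used here.

```python
import numpy as np

def avg_spectral_efficiency(snr_db, sweep_duration_s=0.01, sweep_period_s=1.0):
    """Map per-TTI SNR samples to an average spectral efficiency [bit/s/Hz],
    discounting the fraction of time consumed by periodic beam training."""
    overhead = sweep_duration_s / sweep_period_s          # e.g., 1% for a 1 s period
    snr_lin = 10.0 ** (np.asarray(snr_db) / 10.0)
    return (1.0 - overhead) * np.mean(np.log2(1.0 + snr_lin))

def misaligned_snr_db(aligned_snr_db, misalignment_rad, beamwidth_rad):
    """Parabolic (in dB) main-lobe model: 3 dB loss at half the
    half-power beamwidth away from boresight."""
    loss_db = 12.0 * (misalignment_rad / beamwidth_rad) ** 2
    return aligned_snr_db - loss_db

# A device drifting off boresight between sweeps (baseline) vs. one whose
# beam is re-pointed from position estimates at every TTI (position-aided).
beamwidth = np.deg2rad(22.5)                        # ~16 beams over 360 degrees
drift = np.linspace(0, np.deg2rad(15), 200)         # angular drift over one period
baseline = misaligned_snr_db(20.0, drift, beamwidth)
aided = misaligned_snr_db(20.0, np.full_like(drift, np.deg2rad(1)), beamwidth)
print(avg_spectral_efficiency(baseline), avg_spectral_efficiency(aided))
```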
Finally, to understand the evolution of the signal quality between the neighboring TRPs, we additionally consider a dedicated scenario. Accordingly, the IIoT device in question travels along a line of TRPs separated by the ISD of 50 m, at the closest distance of 10 m to them; Fig. 5 illustrates the SNR dynamics in the case of a flying drone for the beamforming interval of 5 s. Again, the black line corresponds to an upper bound on the SNR (based on perfect knowledge), while the green curve indicates a considerable quality degradation, especially when the device connects via a non-LoS link (the 3GPP TR 38.901 urban microcell channel model for a street canyon is assumed). Our proposed solution, despite occasional instantaneous drops in quality, rapidly restores the connection and confirms excellent average performance, which remains close to that of the theoretical bound.

In this work, we reviewed the major challenges and potential technology solutions to enable intelligent industry-grade communication of advanced devices (cars, drones, and moving robots) by taking advantage of location awareness and multi-connectivity operation. Our main conclusion is that, in order to achieve this complex target, it is necessary to comprehensively combine the expertise and know-how across three lines of research: (i) efficient use of mmWave links, (ii) dynamic seamless multi-connectivity, and (iii) positioning-aided computation offloading for demanding industrial applications. More specifically, with our systematic two-phase numerical evaluation, we demonstrated the benefits of DoA-only and DoA&ToA positioning technologies for improved communication performance of advanced IIoT devices moving at various speeds across an urban landscape. The ray-tracing based component of our developed performance assessment methodology allowed us to contrast the operation of alternative location estimation and tracking methods. In our complementary system-level study, we specifically accounted for the advantages of dynamic multi-connectivity at mmWave frequencies (which offer larger available bandwidths), which made it possible to achieve seamless communication on the move. Further studies need to be dedicated to improving the indoor positioning accuracy with hybrid cmWave-mmWave signals, as such signals will co-exist in the future IIoT. Another direction is the analysis of multi-step evaluation methodologies, taking into account diverse performance criteria as well as extending our SNR- and spectral-efficiency-centric results.
Book reviews

Mary T. Malone, Women and Christianity, II: The Medieval Period AD 1000–1500 (Blackrock: Columba Press, 2001), ISBN 1–85607–339–4 (pbk), 198 pp.
Nicholas Sagovsky, Ecumenism, Christian Origins and the Practice of Communion (Cambridge: Cambridge University Press, 2000), ISBN 0–521–77269–9 (hbk), 221 pp.
John D. Zizioulas, Eucharist, Bishop, Church (trans. Elizabeth Theokritoff; Boston, MA: Holy Cross Orthodox Press, 2001), ISBN 1–885652–43–7 (pbk).
Peter Matheson, The Imaginative World of the Reformation (Minneapolis: Fortress Press, 2001), ISBN 0–8006–3291–5 (pbk), xiv + 153 pp. Illus.
Carol Rittner and John K. Roth (eds.), Pope Pius XII and the Holocaust (London/NY: Leicester University Press/Continuum, 2002), ISBN 0–7185–0274–2 (pbk), 256 pp.
Paul Freston, Evangelicals and Politics in Asia, Africa and Latin America (Cambridge: Cambridge University Press, 2001), ISBN 0–521–80041–2 (hbk), xiii + 344 pp.

as cantor cantorum; Gertrude of Helfta's Christ showered her with priestly authority, clearly not having learned 'that the Sacrament of Orders could be administered only to men'! Christ also supported Bridget of Sweden's attempts to establish a double monastery with lay brothers serving nuns, notwithstanding existing papal decrees forbidding any such possibility, though Christ and Bridget had eventually to modify their plans. There is also some attention given not only to the phenomenon of witch-burning (which persisted into Reformed Protestantism beyond the scope of this volume) but to the management of female prostitution, that persistent feature of 'Christianised' as of non-Christian societies. Unsurprisingly in a work which encompasses so great a period, and which professional ecclesiastical historians are finding to be a goldmine for research, there are things to quibble at, and some obvious points of criticism. In the chapter on Julian of Norwich and the much-misunderstood Margery Kempe, for instance, there is no direct explanation of the phenomenon of pilgrimage, or of its continuing significance in our time, whilst recognising Margery's importance as illuminating the life of a mother of fourteen children and how she negotiated her life in an age when a celibate anchoress rather than a harassed mother would be much more highly regarded. Addressing Jesus as 'mother' in the mode of an Anselm or a Julian needs to take Margery's story into account, and this connection is an example of an important point the author could have made, but does not. Nor, in her treatment of the traditions about Mary of Magdala in her final chapter on the 'two models' (the other being the Virgin Mother) does the author attend to the remarkable medieval interest in Mary of Magdala as a preacher. Bertholdus Heyder, for example, rephrased earlier tradition not only to demonstrate that she herself met all the contemporary criteria for a preacher, but that she established dignity for all women. And she was sometimes depicted with the triple aureola (as an ascetic, a virgin, and a preacher), of interest particularly to Dominicans. In reading the book, therefore, one needs to be alert to its limitations, whilst valuing it for what it does accomplish. It could play a most useful role not only by helping us to recover a sense of the major contribution of women to western European Christianity in at least some contexts, but in enthusing readers to work their own way into a fascinating and richly rewarding period.
In an era where too much theology appears to assume that 'history is bunk', readable reminders of the importance of paying attention to tradition in all its variability and complexity most certainly have their place. ANN

The first draft of this book was the text of the Hulsean Lectures, given in Cambridge in 1996. The author owes a special debt to the meetings and deliberations of the Anglican-Roman Catholic International Commission (ARCIC), of which he is an Anglican member. The book presents an exploration of the different meanings of the Greek term koinonia and/or the Latin translation communio in the Hellenistic and North African (Western) context of the early Church. The discussion focuses not only on the presentation of koinonia as participation in (or expression of) the life of God, but also considers the consequences of this ecclesiological concept for the ecumenical debate. The concept/theology of koinonia/communio has become a kind of ecumenical passe-partout in the last four decades. 'Here is a way of presenting the Christian faith that takes in a fundamental concern with God as Trinity (koinonia in God), with human beings as made for koinonia, with ecclesiology and the doctrine of salvation (koinonia with God and with human beings) and ethics (living in and for koinonia). It engages with much wider debates in society about community, about what it means to be a person, and about human relations' (p. 18). The author examines and discusses the role the term koinonia comes to play in the actual ecumenical bilateral and multilateral dialogues. He mentions especially ARCIC (pp. 18-21) (Church as Communion, 1991), the 'Meissen Declaration' (1991) (pp. 24-5), the 'Concordat of Agreement' (1991) in the USA and the 'Porvoo Common Statement' (1992) (pp. 25-8), the (rather confused and confusing) debate in the Roman Catholic Church on the 'ecclesiology of communion' after the Second Vatican Council, in which the protagonists Joseph Ratzinger and Walter Kasper both claim to be working within the context of the last Council (pp. 28-35), the Roman Catholic international dialogue with Lutherans and Orthodox (pp. 35-9) and multilateral discussions like those of the World Council of Churches' Baptism, Eucharist and Ministry report (1982) and the Fifth World Conference on Faith and Order, which produced the official paper On the Way to Fuller Koinonia (1994) (pp. 41-5). The collection of texts from the ecumenical scene is quite useful, but commonly known. Helpful, and maybe not so well known, is his concise survey on the contributions of Plato (pp. 48-71) and Aristotle to the development of the meaning of koinonia. Plato sees the community in the metaphor of a body in which the parts are not of equal importance, though all are interdependent (p. 52). Aristotle prefers a balance of the different exercises of power in the koinonia of the polis and a 'pluralistic nature' (Bernhard Crick) of authority (pp. 85, 87). The book tries to sketch a possible Jewish (Second Temple period) background of the word koinonia and finds it in the concept of the covenant as an awareness of being-in-relation to God. The early Christian communities describe Jesus as the covenant (pp. 110-15). Nearer to the preaching of Jesus himself would have been an analysis of the concept of the basileia. Origen calls Jesus the auto-basileia, the basileia in person. The situations of the early Christian communities, the Marcan and Johannine communities, the communities at Jerusalem or Corinth (pp.
119-39) are very difficult to assess and to identify. In each case the question remains open whether the particular author of a concrete text reinforces the community in beliefs already held or inculcates foreign beliefs and convictions. A new use of the term koinonia is achieved by the Cappadocians (pp. 146-70). They detect in it a common formula to develop the Trinitarian orthodoxy and also a way which enables the Church, or in Basil of Caesarea the monastic community, to see itself as the body of Christ. Another point of interest is illustrated in the thinking of Augustine (pp. 171-93). The nexus of meanings suggested by koinonia can be found in his writings in a whole range of Latin words like communio/communicatio, participatio, societas, even respublica and civitas. Sagovsky's book can be seen as an advancement of the relevant study of John D. Zizioulas (Being as Communion, London: Darton, Longman & Todd, 1985) in a more explicitly ecumenical-ecclesiological perspective. It is very helpful and sometimes inspiring. The upshot, it seems to me, is that the theory and practice of koinonia are, from a historical-theological point of view, a very plausible approach to the actual situation of ecumenism, even if they still raise as many questions as they resolve.

Holy Cross Orthodox Press has reissued the second edition (1990) of Metropolitan John of Pergamon's 1965 doctoral dissertation in an English translation from the Greek by Elizabeth Theokritoff. In doing so, the publishers have done a service to the ecclesiastical, ecumenical and academic audiences for whom this book will be of interest. The ecumenical and historiographical issues raised and addressed here have not disappeared in the thirty-seven years since this book was written. The theme of the book is the role of the bishop and the Eucharist in the expression of the unity of the church in the first three centuries. The author seeks to correct what he sees as the 'imprisonment of church unity' by modern historiography, which has effectively made no place for the role of the bishop or the Eucharist in accounting for the unity of the Church. Thus both Protestant and Roman Catholic historiography come in for heavy critique by Zizioulas. He replies to these views with the thesis that the unity of the Church found expression above all in the celebration of the Eucharist by one church in one place, presided over by one bishop. The book is divided into three sections. The first attempts to establish a positive relationship between Eucharist, bishop and Church in the earliest Christian communities. The argument then moves to a discussion of the role of the Eucharist and the bishop in the formation of the 'Catholic Church' in the second and third centuries, and concludes with a study of the emergence in the third and fourth centuries of parishes under the eucharistic presidency of a presbyter rather than the one bishop. In my view this third section is the most significant of the book, for it addresses the question of the impact of the development of discrete parishes on the expression of the church's unity. In various forms this question still exercises ecumenists, and Zizioulas' proposals on it still need to be taken up in ecumenical conversation. One question I have about the book's argument has to do with its use of the term 'Eucharist', especially as it applies to the practices of the first three centuries. The study never really defines what the Eucharist was at that time.
Liturgical studies in the past thirty-five years have shown that there was a large amount of diversity in the forms of the eucharistic prayer in early Christianity, and that the development of that prayer took different paths in different locales. How, if at all, does this diversity of development play a role in the development of the expression of Church unity in the first three centuries? Because this is not a general textbook in ecclesiology, one cannot expect it to be exhaustive. Its topic is circumscribed, a fact acknowledged by the author. However, in spite of the author's assertion in the preface to the second edition that 'the basic theses of this work are still sound and need no revision' (p. 5), it would have been helpful to readers interested in exploring the subject further to have included an updated bibliography. I agree that many of the author's conclusions have become common currency in academic and theological circles today. And yet, I wonder if the stature of the author will lead some to take it as the last word in a subject whose details are still very much under discussion. For example, with regard to the historical sources there has been a great deal of research since 1965 on the church order literature and on the Ignatian epistles. Not all scholars even today would agree with Zizioulas' conclusions regarding the ecclesiastical situation behind Ignatius of Antioch's exhortations regarding the mono-episcopate. If it were not possible for the author to include chapters bringing the work up to date, it would have done students a great service to have included more recent scholarship in the bibliography. From an ecumenical perspective, it would have been valuable to include bibliographical references to recent Faith and Order studies of the episcopate and perhaps even of the liturgy of the Eucharist, to point students toward sources demonstrating the influence of the author's ideas on contemporary ecumenical conversations. Finally, because this book could for some readers serve as the introduction to Metropolitan John's work, at least a brief bibliography of his writings would help such readers to see the directions his work has taken since 1965. This work still has much to say to the Orthodox Churches today in their struggle to address the jurisdictionalism which afflicts not only their relations with each other, but also their relations with other churches.

There are some facts about the Reformation that every educated person knows. One is that it introduced antisemitism into Christianity. Another is that without it capitalism could never have come into being. A third is that, with its iconoclasm and its emphasis on the Word, it substituted a verbal religious culture for a visual one. Peter Matheson is to be congratulated on attempting to dispel at least one of these misconceptions in his latest book, originally given as a series of lectures at Edinburgh and published by T&T Clark in 2000. While his earlier Rhetoric of the Reformation (1998) looked at characteristic literary forms of communication in the sixteenth-century reformations, this book considers the 'images' at work in the same period, the new symbolic universe ushered in by the Reformation.
Matheson's purpose is to show that the Reformation was not primarily about ideas (as the theologians tell us) or about social structures (as the social historians tell us), but about a spiritual paradigm shift, 'signposted by the creative metaphors of the preachers and teachers, the images in literature and art, the rhythms and melodies of the popular ballads and chorales which sang the Reformation into people's souls' (p. 6). To demonstrate his point, Matheson takes us through the rich pamphlet literature of early sixteenth-century Germany to show how biblical images of freedom, light, the holy city, and Christ crucified, among many others, were used to convey the Reformation message. In subsequent chapters, he considers the programmes for reform put forward by both urban and rural interests, the 'nightmares' into which these Utopias often turned (notably the Peasants' War on the one hand and the disintegration of the magisterial Reformation on the other), the working out of Reformation principle in the very difficult circumstances of Argula von Grumbach's life, and the spirituality of the Reformation, which Matheson identifies as 'a real hope, but by way of a gritty determination ... to face up to the theatre of real life, not to pretend it is an enchanted one' (p. 139). Here, then, is an extremely ambitious book, which goes a long way towards answering the question which recent Reformation scholarship has studiously avoided, 'Why would anyone have wanted to become Protestant?' One of its strengths is its attention to the pamphlet sources, which allows us to hear what seem to be genuine voices from the past. Another is its recognition that a symbolic universe is not the same as a visual culture, which enables Matheson to acknowledge that the Reformation could still work with a repertoire of mental imagery even when its more extreme supporters sought to smash and mutilate the physical icons they regarded as idols. There are, however, some weaknesses which must also be acknowledged. The first is, again, the ambitious nature of the project. To squeeze into fewer than 150 pages of text a controversial argument, a survey of recent historiography of the Reformation as a whole, and a wide range of cultural and intellectual allusions (from Bertolt Brecht in the first chapter to Paul Ricoeur in the last) is an amazing achievement, and it is not surprising that some corners have had to be cut. Several direct quotations from the pamphlet literature are not referenced. Other references are potentially baffling: for instance, it is not explained that pamphlet titles followed by a fiche and catalogue number refer to the Kohler/Hebenstreit/Weissmann collection, Flugschriften des frühen 16. Jahrhunderts. An important question which Matheson does not address is whether Reformation pamphlets express genuine popular sentiment or simply the views of the educated elite who wrote them. It is important in view of his contention that through the images used in these pamphlets 'the voiceless were given a voice, the visionless a vision' (p. 47). It is revealing that he regards a particular woodcut illustration as indicative of the reader's imagination rather than the author's or publisher's (p. 39). A more justifiable omission is of the mental imagery conveyed by Catholic publications, including Catholic pamphlets, in the same period. Matheson briefly describes Erasmus's image of the Church (p.
94); but more on the competing repertoire of Catholic images would have provided this account of the Protestant imagination with a fuller context. Also lacking (except in the discussion of reform programmes) is much on the late-medieval background to the images discussed here. This gives a one-sided view of the issue. There must already have been some inculcation of basic biblical imagery, however unconscious, for the pamphleteers' and preachers' allusions to light, freedom etc. to have any resonance. Equally, 'theological road-rage' (p. 81) was not a monopoly of Reformation debates: odium theologicum has a long, if not necessarily distinguished, history in the Christian Church. Despite these omissions (which is another way of saying that the book should have been twice as long!), Matheson has provided us with some essential insights into the Reformation as a spiritual event, and for that reason alone this book is to be recommended enthusiastically.

In the flood of numerous works on this topic one welcomes a book which helps make sense out of the frequently contradictory mass of information (and misinformation!) and personal opinion. The book that Carol Rittner and John Roth have edited is one of those books which helps you do precisely that, in an area where sometimes separating fact from opinion or even fiction becomes difficult, since oftentimes each of us has formed our opinions out of prejudices, biases and even distortions of the truth. The collection of contributors to Pope Pius XII and the Holocaust is not only impressive but is also of the highest quality and expertise. The book, which is divided into three parts, successfully treats three important areas of the debate on Pius XII: first, his involvement in the controversies surrounding the Shoah. Then the second part explores the person of the pope and his policies. Finally, in the third section there is an evaluation of the pontificate of Pius XII. One can see from the beginning to the end of this fascinating work that a fundamental methodological principle has been employed, namely that of an attempt to contextualize and not to use hindsight as an evaluative historical principle. This obviously does not relieve us from the realization expressed in the Jewish scholar Michael Marrus's analysis that, when touching on the question of the Holocaust, all are in agreement that no one did enough. In spite of this human realization, when judging history and historical figures other scientific and scholarly criteria must be used. What comes to light from the research conducted by these scholars is that there is still a mass of information that is not fully available to the general public. This is the point of John Pawlikowski's investigation into the facts of the pontificate of Pius XII. Here he uncovers a wealth of information which was little known or completely ignored. The question of the 'silence' of Papa Pacelli on many issues is certainly placed in a different light when this information is factored into the evaluation of what he did or did not do and how it was done. Here the diplomatic channels seem to have played an enormous role, and we are assured that these channels were not silent. What appears to be important here is the fact that we cannot be persuaded by psychological arguments that are too often used on one side of the discussion when the action of Pius XII was quite decisive on a lateral front.
In an extremely focused chapter on the Shoah and the Pope, Richard Rubenstein critically examines the position of the Vatican before and after the Second Vatican Council. This perspective, which treats the position toward religious pluralism that the Catholic Church had before and after the horrific Holocaust experience, puts into context the goal of the Vatican in relationship to secular Jews in a Christian Europe. He poses the question as to whether there was effectively a difference between this goal and the antisemitic programme of the Nazi regime. At Vatican II and in the post-Council period, we begin to see a clear position regarding religious pluralism and religious freedom. This chapter is one of the most challenging of the book. Another aspect of the discussions presented in this work is the ethical question. Here, too, the situation is very complicated, and it is not easy to find a common opinion. Michael Phayer takes us through the intricacies of the discussion on this point. From his extensive research in archives in Germany and elsewhere he arrives at the conclusion that one might disagree with the position that Pacelli took in many regards, but the decisions that he arrived at were taken without hatred or malice toward the Jews. Obviously this conclusion will certainly not content our Jewish brothers and sisters, but it must be seen within the context of the ethics of the time. It is interesting that some discussions with the 'peace churches' on the question of just war and the use of force to protect threatened or endangered populations lead very much in the same direction. This demonstrates that the ethical and moral issues are still not clearly defined. By way of conclusion I recommend this book for all those who have been sufficiently confused by the mass of information and propaganda that has been produced about the key issue of the involvement of the Pope in the extermination of Jews and his collaboration with the National Socialist regime in this programme. At least it appears to this reviewer that a fair hearing has been given to Pacelli and the Vatican policies through a precise scholarly and systematic examination of important material surrounding the Shoah. This is done by both Christian and Jewish scholars and represents another important step on the road to the healing of memories and the beginning of an authentic Jewish-Christian theological dialogue.

This is an important book on an important subject, written by a scholar from Latin America, one of the many places in the modern world where evangelical Protestantism is flourishing, notably in Pentecostal forms. The Foreword is provided by David Martin, one of the first academics in the West to appreciate the significance of evangelicalism as a global and quintessentially modern form of Christianity. Martin himself has been more concerned with the connections between Pentecostalism and culture; Freston's work complements this in so far as it focuses on the political rather than cultural implications of this crucial global movement. In so doing it fills an important gap in the literature. The sheer presence of evangelical forms of religion across vast areas of the modern world is the first point to appreciate, a fact that Western scholars (most notably Europeans) still find difficult to grasp.
Indeed, the most searching question for Europeans currently is to explain the absence of widespread and popular forms of evangelicalism in Europe rather than its presence elsewhere, a task which challenges, very profoundly, the dominant paradigms of Western understanding. This book, however, concentrates on the developing world: on Latin America, on Asia and on Africa. The chapters work round in a circle, beginning with Brazil and ending with the Spanish-speaking countries of Latin America. One point is immediately apparent: the immense diversity both of evangelicals themselves and of their political views. Herein, moreover, lies the book's greatest strength, in that it works from an empirical base (indeed a multitude of empirical bases) to build a picture of the diverse contributions of evangelicals, and the groups of which they are part, to the political field. Generalisations in this diverse and complex world are out of order, a point made absolutely clear from page one. We cannot assume anything about the political outcomes of evangelicalism: we have instead to look at the wide diversity of cases on offer (both between and within the 27 countries described) to see what has happened / is happening in each place. A further point follows from this: that is the disconcerting fact (at least disconcerting for some) that organisational and contextual factors, whether defined in religious or socio-political terms, are often more important for understanding the politics of evangelicals than their theology. Such factors include size, social and ethnic composition, the position of the evangelical constituency relative to other confessions, internal church structures and conflicts, the degree of legitimacy of national myths and the presence or absence of international connections. The pendulum, however, should not swing too far. There are theological constraints for an evangelical community, which must be taken seriously by the members of the group in question, but also by those who observe their behaviour. Some outcomes, moreover, are more likely than others, and the more that we know about a situation, and the multiplicity of variables that are operative therein, the more likely we are to be able to predict with accuracy. This gradual capacity both to discern and to explain patterns illustrates the sociological task at its best. Particular attention is paid to rebutting two theories. The first is the so-called 'conspiracy' theory, 'which supposes American right-wing forces to be behind most political activity by evangelicals in the Third World' (p. 283). Freston, in contrast, assumes the autonomy of Third World evangelicalism unless proved otherwise. The second theory argues that the 'logic' of evangelicalism will produce similar results everywhere, more specifically that what has happened in Europe and North America will necessarily happen elsewhere. Once again, this may not be so, a point that returns us immediately to the empirical nature of the task. We must start from what really happens (what ordinary church members do and say) and not from what theories of whatever kind predict, or even from what church leaders or theologians indicate might be the case. In making us aware of the importance of empirical work, Freston has opened up a huge research agenda. In many of the cases covered in this book, he offers the first rather than the last word, a fact which he freely admits.
It is for others to penetrate further, a task which requires a demanding combination of skills: detailed local knowledge, linguistic competence, considerable sensitivity to the aspirations of those involved and almost infinite patience. But given Freston's innovatory introduction, the importance of the questions can no longer be denied. Evangelicalism offers a way to be both modern and religious in the emergent world order; with this in mind, its political dimensions (amongst many other things) deserve our sustained and careful attention.
Potential of the Solar Energy on Mars

Introduction

The problem of energy accessibility and production on Mars is one of the three main challenges for the upcoming colonisation of the red planet. The energetic potential, in its turn, is mainly dependent on the astrophysical characteristics of the planet. A short insight into the Mars environment is thus the compulsory introduction to the problem of energy on Mars. The present knowledge of the Martian environment is the result of more than two centuries of attentive observation of its astronomical appearance and, more recently, of its on-site astrophysical features. Recent surface measurements of Martian geology, meteorology and climate have fixed the sometimes unexpected image of a completely desert planet. Mars is one of the most visible of the seven planets within the solar system, and therefore its discovery cannot be dated; still, the interest in Mars is old. It was easily observed from ancient times by the naked eye, and the peculiar reddish glance of the planet induced the common connection of the poor planet with the concept of war. The god of war and the planet that inherited his name have provoked, ever since antiquity, curiosity and disputes on the most diverse themes. These disputes are at a maximum right now regarding the habitability of Mars. The red planet owes its color to still unexplained causes, where a yet undisclosed chemistry of iron oxides seems to be the main actor. The calling card of Mars is fast increasing in the quantity of data and is now quite well known (Bizony, 1998), as we observe from the description that follows.

Mars as seen before the space age

As far as the knowledge of the solar system has gradually extended, from optical, ground-based observations to the present astrophysical research on site, Mars appears as the fourth planet starting from the Sun.
The reddish planet of the skies, nicely visible by necked eyes, has attracted the most numerous comments during the time regarding the presence of life on an extraterrestrial planet. With all other eight planets, except for Pluto-Charon doublet, Mars aligns to a strange rule by orbiting the Sun at a distance that approximates a km at mean from the center of Sun. The power rule of Titius-Bode 1 , modified several times, but originally described as 1 (4 3 sgn2) / 1 0 |0 , 9 n an n − =+ × × = gives a better distribution, It is immediately seen that the primary solar radiation flux is roughly two times smaller for Mars than it is for Earth. More precisely, this ratio is equal to 2.32. This observation for long has suggested that the climate on Mars is much colder than the one on Earth. This has not removed however the belief that the red planet could be inhabited by a superior civilization. Nevertheless, beginning with some over-optimistic allegations of Nicolas Camille Flammarion (Flamarion, 1862) and other disciples of the 19-th century, the planet Mars was for a century considered as presenting a sort of life, at least microbial if not superior at all. The rumor of Mars channels is still impressing human imagination. When estimates begun to appear regarding the Martian atmosphere and figures like 50 mbar or 20 mbar for the air pressure on Martian ground were advanced (Jones 2008), a reluctant wave of disapproval has been produced. It was like everybody was hoping that Mars is a habitable planet, that we have brothers on other celestial bodies and the humankind is no more alone in the Universe. As more data were accumulating from spectroscopic observations, any line of emission or absorption on Mars surface was immediately related to possible existence of biological effects. Even during the middle 20-th century the same manner was still preserving. In their book on "Life in the Universe" Oparin and Fesenkov are describing Mars in 1956 as still a potential place for biological manifestations (Oparin & Fesenkov, 1956). The following two excerpts from that book are relevant, regarding the claimed channels and biological life on Mars: "...up to present no unanimous opinion about their nature is formed, although nobody questions that they represent real formations on the planet (Mars)..." and at the end of the book "On Mars, the necessary conditions for the appearance and the development of life were always harsher than on Earth. It is out of question that on this planet no type of superior form of vegetal or animal life could exist. However, it is possible for life, in inferior forms, to exist there, although it does not manifest at a cosmic scale." The era of great Mars Expectations, regarding extraterrestrial life, took in fact its apogee in 1938, when the radio broadcast of Howard Koch, pretending to imaginary fly the coverage of Martian invasion of Earth, had produced a well-known shock around US, with cases of extreme desperation among ordinary people. Still soon thereafter, this sufficed to induce a reversed tendency, towards a gradual diminution of the belief into extraterrestrial intelligence and into a Martian one in particular. This tendency was powered by the fact that no proofs were added in support of any biological evidence for Mars, despite the continuous progress of distant investigations. 
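Returning briefly to the Titius-Bode rule quoted above, the short sketch below evaluates a_n for n = 0...9 and compares the n = 3 term with Mars's actual mean distance; the script and its names are illustrative only.

```python
# Titius-Bode rule: a_n = (4 + 3*sgn(n)*2**(n-1)) / 10  [AU], n = 0..9
import numpy as np

def titius_bode(n):
    """Predicted semi-major axis in astronomical units for index n."""
    return (4 + 3 * np.sign(n) * 2.0 ** (n - 1)) / 10.0

indices = np.arange(10)
print(dict(zip(indices.tolist(), titius_bode(indices).round(2).tolist())))
# n = 3 (Mars): the rule gives 1.6 AU vs. the observed mean distance of ~1.52 AU
# (roughly 228 million km); the solar flux at Mars is therefore about
# 1 / 1.52**2 = 0.43 of the terrestrial value, i.e., a ratio of ~2.32.
print(1.0 / 1.52 ** 2)
```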
Still, every one of the 36 great oppositions of Mars since the "canali" were reported by Giovanni Schiaparelli in 1877, and prior to the space age in 1957, like the series in 1901, 1911, 1941 and 1956, only added subjective dissemination of channel reports, with no other support for the idea that any form of biological life exists on Mars. After Schiaparelli and Flammarion, the subjective belief in a Martian life was successively claimed by the well-known astronomers Antoniadi, Percival Lowell, A. Dollfus, G. A. Tihov and others. Any documentation of a hostile environment on Mars was received with adversity. Despite later spectroscopic and radiometric measurements from Earth, which were revealing a very thin atmosphere and extremely low temperatures, still in the immediate dawn of the space age the pressure at the Martian soil was evaluated at 87 mbar (Oparin & Fesenkov, 1956), overrating by more than ten times the actual value, irrefutably found after 1964. It is telling evidence of how subjectively the world can handle even the most credible scientific data on this delicate subject. A piece of this perception survives even today.

Mars during the space age

With the availability of a huge space carrier, in fact a German concept of Gröttrup, the Soviets started to build a Mars spacecraft during 1959, alongside the manned spacecraft Vostok. The launch took place as early as October 1960, a mere 3 years after Sputnik-1, but it proved of little use: the restart of the accelerator stage from orbit was a failure, repeated a few weeks later with similar non-results. The boost towards Mars commenced again with Mars-1 and a companion during the 1962 window, ending in failure again. It followed this way that a much smaller but smarter US device called Mariner-4, launched in 1964 despite the nose-shroud separation failure of its companion Mariner-3, marked the history of Mars with a shaking fly-by of the planet in 1965 and several crucial observations. These struck like a thunderbolt: the radio occultation experiment suddenly revealed an unexpectedly low atmospheric pressure on Mars of approximately 5 mbar, far below most of the previous expectations. Life on Mars was bluntly threatened to become a childish story. Primitive but breathtaking images from Mariner-4 also showed a surface more similar to the Moon's than any previous expectation could predict. A large number of craters were mixed with dunes and shallow crevasses to form a desert landscape, and this appearance produced a change in the perception of the formation of the solar system at large. No channel was revealed, and none of the previously mentioned geometrical marks on the Martian surface. The wave of disappointment grew rapidly into a most productive bolster for deepening these investigations, so incredible were those results. Mars's exploration proceeded with Mariner-6 and 7, which performed additional Martian fly-byes in 1969, only to confirm the portrait of a fully deserted planet. The present hostile environment on Mars and the absence of life seem for now entirely proven, but it still remains to understand when this transformation took place, whether there was at some time a more favorable environment on Mars, and all issues regarding the past of Mars. We say for now because our human nature bolsters us towards an ideal which we only dream of, but which, as we shall prove, is almost achievable at the level of present technologies.
Anyhow, the grandchildren of our grandchildren will perhaps carry it through to the end. We are speaking of Mars's colonization, a process that could only take place after the complete transformation of the surface and of the atmosphere to closely resemble those of Earth, a process we call terraforming. If planet Mars is worth more than its terraforming, then it deserves all the money spent on this process. For the moment, however, the Martian environment is far from being Earth-like, as the comparative data in the next table show. The data in Table 2 are based on the mean eccentricities of the year 2009 as given in the reference. Accordingly, a value of G = 6.67428·10⁻¹¹ m³/(kg·s²) is used for the universal constant of gravitation. The second is the time unit used to define the sidereal periods of revolution around the Sun and is derived, in its turn, from the conventional solar day of 24 hours on January 1, 1900. The atmosphere of Mars is presently very well known and consists, in order, of carbon dioxide (CO2) 95.32%, nitrogen (N2) 2.7%, argon (Ar) 1.6%, oxygen (O2) 0.13%, carbon monoxide (CO) 0.07%, water vapor (H2O) 0.03%, nitric oxide (NO) 0.013%, neon (Ne) 2.5 ppm, krypton (Kr) 300 ppb, formaldehyde (H2CO) 130 ppb [1], xenon (Xe) 80 ppb, ozone (O3) 30 ppb, methane (CH4) 10.5 ppb and other negligible components. This composition is further used to assess the effect of solar radiation upon the dissociation and loss of the upper Mars atmosphere and upon potential greenhouse gases. The present atmosphere of Mars is extremely tenuous, below 1% of the Earth's, and seemingly unstable. Seasonal warming and cooling around the poles produce variations of up to 50% in atmospheric pressure due to carbon dioxide condensation in winter. The values in Table 18.2 are rough means along an entire year, as measured by the Viking-1 and Viking-2 landers and, more recently, by the Pathfinder and Phoenix stations. The greenhouse effect of carbon dioxide is considered responsible for a 5° increment of the atmospheric temperature, very low however, with only occasional and local warming above 0 °C on privileged equatorial slopes. The chart of the present Martian atmosphere (Allison & McEwen, 2000) is given in figure 18.1. The exponential (scale-height) constant for pressure on Mars is H = 11,000 m; a short sketch of the resulting pressure profile follows below. These atmospheric characteristics stand as the database in evaluating the efficiency of the solar-gravitational draught, closed-circuit powerplant, which we propose to be employed as an energy source on Mars.
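To make the scale-height figure above concrete, the short sketch below evaluates the simple isothermal barometric profile p(h) = p0 · exp(−h/H) with H = 11,000 m. The surface pressure of roughly 6.1 mbar is an assumed representative value (the actual figure varies seasonally by tens of percent, as noted above), and the function names are illustrative.

```python
import numpy as np

H_MARS = 11_000.0      # pressure scale height on Mars [m], as quoted in the text
P0_MARS = 610.0        # assumed mean surface pressure [Pa] (~6.1 mbar)

def mars_pressure(h_m, p0=P0_MARS, scale_height=H_MARS):
    """Isothermal barometric estimate of atmospheric pressure at altitude h [m]."""
    return p0 * np.exp(-np.asarray(h_m, dtype=float) / scale_height)

# Pressure at the datum, near the summit of Olympus Mons (~21 km), and at 50 km.
for h in (0.0, 21_000.0, 50_000.0):
    print(f"h = {h / 1000:5.1f} km -> p = {mars_pressure(h):7.2f} Pa")
```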
Mars's presumptive past

It is generally considered that the planetary atmospheres, including that of Mars, went through major transformations at the beginning of the formation of the solar system (Brain, 2008). Present models show a gradual and fast depletion of the Martian atmosphere, with essential implications for designing Mars's terraforming. Even today, the depletion of Mars's atmosphere continues on a visible scale. The observations recently made by Mars Express and Venus Express have allowed scientists to draw similar conclusions for both Mars and Venus, through direct comparisons between the two planets. The phenomenon thus proves to be nonsingular and systematic (Brain, 2008). The results have shown that both planets release beams of electrically charged particles that flow out of their atmospheres. The particles are being accelerated away by interactions with the solar wind released by the Sun. This phenomenon was observed by the two spacecraft while probing directly into the magnetic regions behind the planets, which are the predominant channels through which electrically charged particles escape. The findings show that the rate of escape rose by ten times on Mars when a solar storm struck in December 2006. Despite the differences between the two atmospheres, the magnetometer instruments have discovered that the structure of the magnetic fields of both planets is alike. This is due to the fact that the density of the ionosphere at 250 km altitude is surprisingly similar, as the Venus Express magnetometer instrument shows. By observing the current rates of loss of the two atmospheres, planetary scientists try to turn back the clock and understand what they were like in the past. To build a model of Mars's atmospheric past, we can only base our presumptions on the in-situ observed traces impressed by that past evolution. A very detailed computer modeling of the charged-particle erosion and escape may be found in a recent paper from Astrobiology (Terada et al., 2009), with the tools of magneto-fluid-dynamics and today's knowledge of the Martian upper atmosphere and of the solar radiation intensity. First, recent orbital observations of networks of valleys in highlands and cratered areas of the Southern Hemisphere, especially those by Mars Express and other Mars orbiters, suggest that Mars exhibited a significant hydrologic activity during the first Gyr (giga-year) of the planet's lifetime. Some suggest that an ancient water ocean equivalent to a global layer with a depth of about 150 m is needed for the explanation of the observed surface features (Bullock & Moore, 2007). These authors suppose that the evolution of the Martian atmosphere and its water inventory since the end of the magnetic dynamo (Schubert et al., 2000) at the late Noachian period, about 3.5-3.7 Gyr ago, was dominated by non-thermal atmospheric loss processes. This includes solar wind erosion and sputtering of oxygen ions and thermal escape of hydrogen (Zhang et al., 1993). Recent studies (Wood et al., 2005) show that these processes could completely remove a global Martian ocean with a thickness of about 10-20 m, indicating that the planet should have lost the majority of its water during the first 500 Myr. Since Mars, like all the planets of the solar system, was exposed to asteroid impacts before the Noachian period, their effect on atmospheric erosion was also simulated. Furthermore, the study uses multi-wavelength observations by the ASCA, ROSAT, EUVE, FUSE and IUE satellites of Sun-like stars at various ages for the investigation of how the high X-ray and EUV fluxes of the young Sun have influenced the evolution of the early Martian atmosphere. Terada et al. apply for the first time a diffusive-photochemical model and investigate the heating of the Martian thermosphere by photo-dissociation and ionization processes and by exothermic chemical reactions, as well as the cooling due to CO2 IR-radiation loss. The model used yields high exospheric temperatures during the first 100-500 Myr, which result in blow-off for hydrogen and even high loss rates for atomic oxygen and carbon. By applying a hydrodynamical model for the estimation of the atmospheric loss rates, results were obtained which indicate that the early Martian atmosphere was strongly evaporated by the young Sun and lost most of its water during the first 100-500 Myr after the planet's origin.
The efficiency of the impact erosion and of the hydrodynamic blow-off are compared, with the conclusion that both processes rate about the same. It is a common belief now that during the early, very active Sun's lifetime, the solar wind velocity was faster than the one recorded today and the solar wind mass flux was higher during this early active solar period (Wood et al., 2002, 2005; Lundin et al., 2007). As the solar past is not directly accessible, comparisons to neighboring stars of the same G and K main sequence, as observed, e.g., by the Hubble Space Telescope's high-resolution spectroscopic camera in the H Lyman-α feature, revealed neutral hydrogen absorption associated with the interaction between the stars' fully ionized coronal winds and the partially ionized interstellar medium (Wood et al., 2002, 2005). These observations concluded in finding that stars with ages younger than that of the Sun present mass loss rates that increase with the activity of the star. A power-law relationship was drawn for the correlation between mass loss and X-ray surface flux, which suggests an average solar wind density that was up to 1000 times higher than it is today (see Fig. 2 in Lammer et al., 2003a). This statement is considered as valid especially for the first 100 million years after the Sun reached the ZAMS (Kulikov et al., 2006, 2007), but not all observations agree with this model. For example, observations by Wood et al. (2005) of the absorption characteristic of the solar-type G star τ-Boo, estimated as being 500 million years old, indicate that its mass loss is about 20 times less than the loss rate given by the mass-loss power law for a star with a similar age. Young stars of similar G- and K-type, with surface fluxes of more than 10⁶ erg/cm²/s, require more observations to ascertain exactly what is happening to stellar winds during high coronal activity periods. Terada et al. (2009) use a lower value for the mass loss, which is about 300 times larger than the average proton density at the present-day Martian orbit. These high uncertainties regarding the solar wind density during the Sun's youth prevent a confident determination of how fast the process of atmospheric depletion took place. To overcome this uncertainty, Terada et al. (2009) have applied a 3-D multispecies MHD model based on the Total Variation Diminishing scheme of Tanaka (1998), which can self-consistently calculate a planetary obstacle in front of the oncoming solar wind and the related ion loss rates from the upper atmosphere of a planet like Mars. Recently, Kulikov et al. (2006, 2007) applied a similar numerical model to the early atmospheres of Venus and Mars. Their loss estimates of ion pickup through numerical modeling strongly depended on the chosen altitude of the planetary obstacle, since for a planet closer to the star more neutral gas from the outer atmosphere of the planet can be picked up by the solar plasma flow. We shall give here a more developed model of the rarefied atmosphere sweeping, by including the combined effect of electrically charged particles and the heterogeneous mixture of gas and fine dust powder as encountered in the Martian atmosphere along the entire altitude scale.
Data were sought from the geology of some relevant features of the Mars surface, like Olympus Mons (Phillips et al., 2001; Bibring et al., 2006), the most prominent mountain in the solar system, to derive an understanding of the past hydrology on the red planet and the timing of geo-hydrological transformations of the surface. Olympus Mons is an unusual structure based on stratification of powdered soils. To explain the features, computer models with different frictional properties were built (McGovern and Morgan, 2009). In general, models with a low friction coefficient reproduce two properties of Olympus Mons' flanks: slopes well below the angle of repose and exposure of old, stratigraphically low materials at the basal scarp (Morris and Tanaka, 1994) and distal flanks (Bleacher et al., 2007) of the edifice. The authors show that such a model with a 0.6° basal slope produces asymmetries in averaged flank slope and width (shallower and wider downslope), as actually seen at Olympus Mons. However, the distal margins are notably steep, generally creating planar to convex-upward topography. This is in contrast to the concave shape of the northwest and southeast flanks and the low-slope distal benches (next figure) of Olympus Mons. Incremental deformation is concentrated in the deposition zone in the center of the edifice. Outside of this zone, deformation occurs as relatively uniform outward spreading, with little internal deformation, as indicated by the lack of discrete slip surfaces. This finding contradicts the evidence for extension and localized faulting in the northwest flank of Olympus Mons. The wedge-like deformation and convex morphology seen in the figure appear to be characteristic of models with constant basal friction. McGovern and Morgan (2009) found that basal slopes alone are insufficient to produce the observed concave-upward slopes and asymmetries in flank extent and deformation style that are observed at Olympus Mons; instead, lateral variations in basal friction are required. The conclusion is that these variations are most likely related to the presence of sediments, transported and preferentially accumulated downslope from the Tharsis rise. Such sediments likely correspond to ancient phyllosilicates (clays) recently discovered by the Mars Express mission. This could only suggest that liquid water was involved in the formation of ancient Olympus Mons, but neither of the conclusions of this work is strong enough. Up to the present, those findings are some of the hardest arguments for ancient water on Mars for a geologically long period. The importance of this finding is in the promise that a thick atmosphere and wet environment could have persisted for a long period, and thus the Earth-like climate could withstand the depletion influences long enough. This is the type of encouraging argument that the community of terraforming enthusiasts is waiting for. Problems are still huge, starting with the enormous amounts of energy required for the journey to Mars and back.

Energy requirements for the Earth-Mars-Earth journey

The problem of transportation from Earth to Mars and vice versa is restricted by the high amount of energy that must be spent for one kg of payload to climb into the gravitational field of the Sun up to the Mars-level orbit and back to the Earth's orbit, where the difference in the orbital speed must also be zeroed. Astrophysical data from the first paragraph allow computing exactly this amount of energy.
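As an illustration of how these figures follow from the basic astrophysical data, the sketch below evaluates the classical coplanar Hohmann transfer between the Earth and Mars orbits using the vis-viva relation. The gravitational parameter and orbital radii are standard textbook values, and the routine is a simplified check rather than the exact calculation developed below (which also touches on the small orbital inclination and the planetocentric legs of the flight).

```python
import numpy as np

MU_SUN = 1.32712440018e20   # heliocentric gravitational parameter [m^3/s^2]
R_EARTH = 1.496e11          # mean orbital radius of Earth [m]
R_MARS = 2.279e11           # mean orbital radius of Mars [m]

def vis_viva(mu, r, a):
    """Orbital speed from the energy equation v^2 = mu * (2/r - 1/a)."""
    return np.sqrt(mu * (2.0 / r - 1.0 / a))

def hohmann(mu, r1, r2):
    """Heliocentric delta-v at departure and arrival for a Hohmann transfer."""
    a_t = 0.5 * (r1 + r2)                                # transfer-ellipse semi-major axis
    dv1 = vis_viva(mu, r1, a_t) - vis_viva(mu, r1, r1)   # perihelion burn (at Earth's orbit)
    dv2 = vis_viva(mu, r2, r2) - vis_viva(mu, r2, a_t)   # aphelion burn (at Mars's orbit)
    return dv1, dv2

dv1, dv2 = hohmann(MU_SUN, R_EARTH, R_MARS)
print(f"departure excess: {dv1 / 1e3:.3f} km/s, arrival excess: {dv2 / 1e3:.3f} km/s")
# about 2.95 km/s and 2.65 km/s, consistent with the values derived in the text
```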
With the inertial, reactive propulsion of today, two strategies of transferring from Earth to Mars are possible, belonging respectively to high-thrust navigation and to low-thrust navigation. The energy requirements we consider are those for the transfer from a low-altitude circular orbit around the starting planet to a low-altitude circular orbit around the destination planet. The energy of injection into the corresponding low orbits should be added as a compulsory, given extra quantity, since splitting the flight by pausing in a transfer orbit neither adds to nor diminishes the energy consumption of the total flight. It remains to determine the energy consumption for the transfer itself. The motion in a field of central forces always remains planar and is easily described in a polar, rotating referential by the equation of Binet (Synge and Griffith, 1949), based on the polar angle θ, named the true anomaly. For a gravitational field of parameter K this equation reads d²(1/r)/dθ² + 1/r = K/C², with C the constant angular momentum per unit mass. The solution is a conical curve r(θ) that may be an ellipse, parabola or hyperbola, depending on the initial position and speed of the particle. The expression of the orbit equation may be found by returning to the Cartesian, planar orthogonal coordinates {xOy}, where the position vector and the velocity are expressed in terms of the true anomaly (Mortari, 2001). Between the local position on the conical orbit and the corresponding orbital velocity a relation exists that derives from the conservation of energy, or simply from integrating the equation of motion, in this case with the Earth gravitational parameter K⊕. Considering the case when the orbital eccentricity lies between 0 and 1, that is elliptical orbits, the usual energy equation of the orbital speed appears (Synge and Griffith, 1949): v² = K(2/r − 1/a). (4) These formulae are all that is needed to assess the energy requirements for an impulsive transfer towards Mars, and thus a so-called high-thrust interplanetary transfer results. The results supplied this way are bound to some approximations when considering the actual Earth-Mars transfer. They mainly come from the fact that the two planets move along slightly non-coplanar orbits; the angle between the two orbital planes is the one given in Table 18.2, namely the inclination of Mars's orbit to the ecliptic, about 1.85°. This circumstance induces only a very small difference in the transfer energy, however, and all previous results from the coplanar approximation remain essentially correct. Energy for Earth-Mars high-thrust transfers: With moderately high-thrust propulsion the acceleration for the transfer acts quickly and the time of flight reaches its practical minimum. The amount of energy is directly given by the equation of motion on the quasi-optimal, quasi-Hohmann transfer orbit that starts from around the Earth and ends around Mars (figure below). Fig. 3. Minimum-energy Earth-Mars transfer. To enter the higher-altitude orbit as referred to the Sun, the transfer spacecraft must be provided with extra kinetic energy at the Earth orbit level, corresponding to the perihelion speed v_P of the Hohmann transfer ellipse. This difference is easily computed by using equation (4). When we introduce the gravitational parameter of the Sun K☉ and the location of the perihelion at Earth orbit level r_P, the extra speed required by the spacecraft is Δv_P = v_P − v_E = 32.746 − 29.794 = 2.952 km/s.
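As a numerical check of these figures, the short Python sketch below (my own illustration, not part of the chapter) evaluates equation (4) on the Hohmann transfer between the mean orbital radii of Earth and Mars. The solar gravitational parameter and the two radii are standard rounded values assumed here, which is why the printed speeds differ slightly from the 32.746 and 29.794 km/s quoted above.

```python
import math

MU_SUN = 1.32712440018e20      # m^3/s^2, assumed standard value
R_EARTH_ORBIT = 1.496e11       # m, mean Sun-Earth distance (assumed)
R_MARS_ORBIT = 2.279e11        # m, mean Sun-Mars distance (assumed)

def vis_viva(mu, r, a):
    """Equation (4): orbital speed from v^2 = mu*(2/r - 1/a)."""
    return math.sqrt(mu * (2.0 / r - 1.0 / a))

a_transfer = 0.5 * (R_EARTH_ORBIT + R_MARS_ORBIT)          # Hohmann semi-major axis
v_earth = vis_viva(MU_SUN, R_EARTH_ORBIT, R_EARTH_ORBIT)   # circular Earth-orbit speed
v_peri = vis_viva(MU_SUN, R_EARTH_ORBIT, a_transfer)       # perihelion of the transfer
v_mars = vis_viva(MU_SUN, R_MARS_ORBIT, R_MARS_ORBIT)      # circular Mars-orbit speed
v_apo = vis_viva(MU_SUN, R_MARS_ORBIT, a_transfer)         # aphelion of the transfer

print(f"dv at perihelion (Earth end): {(v_peri - v_earth) / 1e3:.3f} km/s")  # ~2.95
print(f"dv at aphelion (Mars end)  : {(v_mars - v_apo) / 1e3:.3f} km/s")     # ~2.65
```

The second figure is the hyperbolic excess speed needed at the Mars end, which is used in the discussion that follows.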
With the same procedure we observe that, on arrival at the Mars orbit level, the spacecraft requires additional energy and speed to remain on the Martian orbit. At the Mars end of the flight the spacecraft is engaged in a continuous fall towards Mars, as it enters the gravitational field of Mars with a hyperbolic excess velocity of 2.654 km/s. Similar computations show that at Mars arrival, namely at an altitude of merely 50 km above the Martian surface, where aerodynamic braking begins to manifest, the velocity relative to Mars rises to 5.649 km/s and must be completely cancelled. These transfer velocities add up to 17.057 km/s. The high amount of energy at approach and landing on Mars can be diminished by aerodynamic braking, at least in part. It then only remains to assure the launching velocity from Earth, with its considerable amount. In terms of energy, this requirement equals 65 MJ for each kilogram of end-mass placed on the hyperbolic departure orbit, to which the energy for lifting that kilogram from the ground to the altitude of the orbit must be added. This extra energy of less than 2 MJ looks negligible, however, compared to the kinetic energy requirement above. It must also be considered that a value of only 16 MJ is required for the return travel to Earth from MLO (Mars low orbit). It is thus hard to understand the reasoning for the so-called one-way trips to Mars or, to put it openly, "no-return trips to Mars", warmly proposed by some experts in Mars colonization from Earth. A thick atmosphere facing erosion: Long-term events in the outer atmosphere of the planet are related to the interaction of the solar radiation and particle flow (solar wind) with the very rarefied, heterogeneous fluid envelope of the planet. A model of an electrically charged gas was recently used by Terada et al. (2009) to approach the erosion rate of the Martian atmosphere under high UV solar radiation. We now add the two-phase fluid model that covers the dust dispersion, shown as early as 1975 by the Viking spacecraft to be present at high altitudes in the thin atmosphere. First the equations of conservation and motion for the heterogeneous and rarefied fluid should be introduced. Equations governing the exo-atmospheric erosion: When writing the equations of motion of a material particle of the rarefied gas and of its condensed particle content, the relative referential attached to the moving planet is considered. The equations of motion, referenced to this non-inertial coordinate system, depend on the relative acceleration of the particles and thus on the transport and complementary acceleration terms, given by the Rivals and Coriolis formulae (Rugescu, 2003), where Ω_t is the anti-symmetric matrix of the transport (rotation) velocity. Four distinct types of time derivatives exist when the relative fluid motion is described (Rugescu, 2000). With an apostrophe denoting the absolute, total (material) time derivative and d/dt the local total time derivative, the vectors and associated matrices of relative motion are defined accordingly, the other components presenting orthonormal properties. Consequently, the global effect of the relative, non-inertial motion upon the acoustical behavior of the fluid can only be described in a computational 3-dimensional scheme, although partial effects can be evaluated under simpler assumptions. The twin viscosity coefficient model describes the rheology of the fluid, with two coefficients of viscosity introduced.
The following NS-type, laminar, unsteady hyperbolic matrix equation system is written for both the gas fraction and the condensed part of the fluid mixture, and stands, for example, as the background of the Eagle solver (Tulita et al., 2002), used in many previous aerodynamic investigations at QUB and UPB. A single type of condensed particle is considered; in other words, the chemical interactions are seen as minimal and not interfering with the flow process. To express more precisely the conservation of the gas and particles that fill the same elemental volume dV, we observe that only a fraction α of the lateral area of the frontier is occupied by the gas, while the remainder fraction (1−α) is traversed by the condensed particles. Consequently, the area where the pressure acts upon the gas is αA_i for each facet of the cubic volume element (Fig. 4). In orthogonal, 3-D non-inertial coordinates these equations are written out for the gas fraction and for the condensate, with the flux vectors defined in connection with the above system (3) and (4); the full expressions are not reproduced here. Fig. 4. Normal and shear stress components σ_xx, σ_xy, σ_xz and their increments (e.g. σ_xx + ∂σ_xx/∂x·dx) on the facets of the elemental volume. Specific notations were also used in the expression of the vector K, referring to the distribution of condensed particles. The authors of the simulation show that if they use the same assumption as Pérez-de-Tejada (1992) or Lammer et al. (2003b), namely that the cold ions are lost through the entire circular ring area around the planet's terminator, a maximum O+ loss rate of about 1.2×10^29 s^-1 is obtained. Integrating this loss rate over a period of Δt = 150 million years, a maximum water loss equivalent to a global Martian ocean with a depth of 70 m is obtained, which is quite impressive and shows that the erosion of the atmosphere is extremely severe. Solar-gravitational draught on Mars: The innovative model of the solar-gravitational air accelerator, for use on Mars and on other celestial bodies with a thin atmosphere or without an atmosphere, is the closed-circuit tower in Figure 6. Fig. 6. Closed-circuit thermal-gravity tower for celestial bodies without atmosphere. The air ascent tunnel is used as a heater, with solar radiation collected through the mirror array, while the descent tunnels are used as air coolers to close the gravity draught. According to the design in Fig. 6, a turbine is introduced in the facility next to the solar receiver, with the role of extracting at least part of the energy recovered from the solar radiation and transmitting it to the electric generator for conversion into electricity. The heat from the flowing air is thus transformed into mechanical energy, with the payoff of a supplementary air rarefaction and cooling in the turbine. The best energy extraction will take place when the air entirely recovers the ambient temperature before the solar heating, although this remains for the moment rather hypothetical. To quantify the possible amount of energy extraction, the quotient ω is introduced, as defined further on. Some differences appear in the theoretical model of the turbine system as compared to the simple gravity-draught wind tunnel previously described.
The process of air acceleration at the tower inlet is governed by the same incompressible energy (Bernoulli) equation as in the previous case, with constant density ρ_0 through the process. The air is heated in the solar receiver with the amount of heat q, in a process with dilatation and acceleration of the airflow, accompanied by the usual pressure loss, sometimes called "dilatation drag" [8]. Considering a constant cross-sectional area in the heating (solar receiver) zone of the tube and adopting the variable γ for the amount of heating rather than the heat quantity itself, the continuity condition shows how the speed varies through the heater. No global impulse conservation appears in the tower in this case, as long as the turbine is a source of impulse extraction from the airflow. Consequently the impulse equation is written for the heating zone only, where the loss of pressure due to the air dilatation occurs, in the form of eq. (27). A possible pressure loss due to friction in the lamellar solar receiver is accounted for through Δp_R. The dilatation drag is thus perfectly identified, and the total pressure loss Δp_Σ from outside up to the exit from the solar heater appears in the corresponding expression. Observing the definition of the rarefaction factor in (24) and after some rearrangements, equation (28) follows. The thermal transformation further on, in the turbine stator grid, is considered isentropic, and the enthalpy of the warm air follows accordingly. If the simplifying assumption is accepted that, under this aspect only, the heating progresses at constant pressure, then a much simpler expression for the enthalpy fall in the stator appears. To better describe this process, a choice must be made between a new rarefaction ratio of densities ρ_3/ρ_2 and the energy quota ω, and the choice is here made for the latter. In the isentropic stator the known variation of the thermal parameters occurs. The air pressure at stator exit follows from combining (32) and (29). Considering a Zölly-type turbine, the rotor wheel is thermally neutral and no variation in pressure, temperature or density appears. The only variation is in the air kinetic energy, as the absolute velocity of the airflow decreases from c_3 to c_3·sin α_1, and this kinetic energy variation is converted into mechanical work outside. The air ascent in the tube is then only accompanied by the gravity up-draught effect due to its reduced density, although the temperature could drop to the ambient value. We call this quite strange phenomenon the cold-air draught. It is governed by the simple gravity form of Bernoulli's equation of energy, under the simplifying assumption, made again, that the air density varies insignificantly during the tower ascent. The value for p_3 is here the one in (35). At the air exit above the tower a sensible braking of the air occurs, although the air density suffers insignificant variations during this process. The Bernoulli equation is used to retrieve the stagnation pressure of the escaping air above the tower under incompressible conditions, with the value for p_5 taken from (36) and the density ratio from (24) and (33). It is observed again that, up to this point, the entire motion in the tower hangs on the value of the mass flow-rate, as yet unknown.
The mass flow-rate itself will take the value that fulfils the condition of outside pressure equilibrium: in this way the air pressure of the outside atmosphere at the local altitude equals the stagnation pressure of the airflow escaping from the inner tower. Introducing (38) into (39), after some rearrangements the dependence of the global mass flow-rate along the tower, when a turbine is inserted after the heater, is given by the final developed formula (40), where the notations are again recollected: γ = (ρ_0 − ρ_2)/ρ_0, the dilatation by heating in the heat exchanger, previously denoted by r; ω, the part of the received solar energy that can be extracted in the turbine; Δp_R, the pressure loss in the heater and along the entire tube. All other variables are already specified in the previous chapters. It is clearly seen that by zeroing the turbine effect (ω = 0), or by neglecting the friction, formula (40) reduces to the previous form, which stands as a validity check for the above computations. For different given values of the efficiency ω, the variation of the mass flow-rate through the tube depends parabolically on the rarefaction factor γ. Note that the result in (40) is based on the convention (30); the exact expression of the energy q introduced by solar heating does not, however, change this result significantly. Regarding the squared mass flow-rate itself in (40), it is obvious that the right-hand term of its expression must be positive to allow for real values of R². This only happens when the governing terms have the same sign. The larger term here is the ratio p_0/(ρ_0·g), which always carries a negative sign in the expression while not vanishing. The conclusion is that the tower should surpass a minimal height for R² to be real, and this minimal height would be quite large. Only very reduced values of the efficiency ω are therefore permitted for acceptably tall solar towers. This behavior is nevertheless altered by the first factor in (41), which is the denominator of (30) and which may vanish in the usual range of rarefaction values γ. A sort of thermal resonance appears at those points and the turbine tower then works properly. Reasons and costs for terraforming Mars: Thicken Mars' atmosphere, and make it more like Earth's. Earth's atmosphere is about 78% nitrogen and 21% oxygen, and is about 140 times thicker than Mars' atmosphere. Since Mars is so much smaller than Earth (about 53% of the Earth's radius), all we'd have to do is bring about 20% of the Earth's atmosphere over to Mars. If we did that, not only would Earth be relatively unaffected, but the Martian atmosphere, although it would be thin (since the force of gravity on Mars is only about 40% of what it is on Earth), would be breathable, roughly equivalent to breathing the air in Santa Fe, NM. So that's nice; breathing is good. Mars needs to be heated up, by a lot, to support Earth-like life. Mars is cold. Mars is damned cold. At night, in the winter, temperatures on Mars get down to about -160 degrees! (If you ask, "Celsius or Fahrenheit?", the answer is first one, then the other.) But there's an easy fix for this: add greenhouse gases. This has the effect of letting sunlight in, but preventing the heat from escaping. In order to keep Mars at about the same temperature as Earth, all we'd have to do is add enough carbon dioxide, methane, and water vapor to Mars' atmosphere. Want to know something neat?
If we're going to move 20% of our atmosphere over there, we may want to move 50% of our greenhouse gases with it, solving some of our environmental problems in the process. These greenhouse gases would keep temperatures stable on Mars and would warm the planet enough to melt the icecaps, covering Mars with oceans. All we'd have to do then is bring some lifeforms over and, very quickly, they'd multiply and cover the Martian planet in life. As we see on Earth, if you give life a suitable environment and the seeds for growth/regrowth, it fills it up very quickly. So the prospects for life on a planet with an Earth-like atmosphere, temperature ranges, and oceans are excellent. With oceans and an atmosphere, Mars wouldn't be a red planet any longer. It would turn blue like Earth! This would also be good for when the Sun heats up in several hundred million years, since Mars will still be habitable when the oceans on Earth boil. But there's one problem Mars has that Earth doesn't, one that could cause Mars to lose its atmosphere very quickly and go back to being the desert wasteland that it is right now: Mars doesn't have a magnetic field to protect it from the solar wind. The Earth's magnetic field, sustained by our molten core, protects us from the solar wind. Mars needs to be given a magnetic field to shield it from the solar wind. This can be accomplished either by permanently magnetizing Mars, the same way you'd magnetize a block of iron to make a magnet, or by re-heating the core of Mars sufficiently to make the center of the planet molten. In either case, this allows Mars to have its own magnetic field, shielding it from the solar wind (the same way Earth gets shielded by our magnetic field) and allowing it to keep its atmosphere, oceans, and any life we've placed there. But this doesn't tell us how to accomplish these three things. The third one seems to us to be especially difficult, since it would take a tremendous amount of energy to do. Still, if you wanted to terraform Mars, these three steps alone would give you a habitable planet. The hypothetical process of making another planet more Earth-like has been called terraforming, and terraforming Mars is a frequently mentioned possibility in terraforming discussions. To make Mars habitable to humans and earthly life, three major modifications are necessary. First, the pressure of the atmosphere must be increased, as the pressure on the surface of Mars is only about 1/100th that of the Earth. The atmosphere would also need the addition of oxygen. Second, the atmosphere must be kept warm. A warm atmosphere would melt the large quantities of water ice on Mars, solving the third problem, the absence of water. Terraforming Mars by building up its atmosphere could be initiated by raising the temperature, which would cause the planet's vast CO2 ice reserves to sublime and become atmospheric gas. The current average temperature on Mars is −46 °C (−51 °F), with lows of −87 °C (−125 °F), meaning that all water (and much carbon dioxide) is permanently frozen. The easiest way to raise the temperature seems to be by introducing large quantities of CFCs (chlorofluorocarbons, highly effective greenhouse gases) into the atmosphere, which could be done by sending rockets filled with compressed CFCs on a collision course with Mars.
After impact, the CFCs would drift throughout Mars' atmosphere, causing a greenhouse effect which would raise the temperature, leading CO2 to sublimate and further continue the warming and atmospheric buildup. The sublimation of gas would generate massive winds, which would kick up large quantities of dust particles, which would further heat the planet through direct absorption of the Sun's rays. After a few years, the largest dust storms would subside, and the planet could become habitable to certain types of algae and bacteria, which would serve as the forerunners of all other life. In an environment without competitors and abundant in CO2, they would thrive. This would be the biggest step in terraforming Mars. Conclusion: The problem of creating a sound source of energy on Mars is of main importance and is related to the capacity of transportation from Earth to Mars, very limited in the early stages of Mars colonization, and to the capacity of producing the raw materials in situ. Consequently, the most important parameter that will govern the choice of one or another means of producing energy will be the specific weight of the powerplant. Besides the nuclear sources, which most probably will face major opposition to large-scale use, the only applicable source that remains valid is the solar one. Since the solar flux is almost four times fainter on Mars than on Earth, the efficiency of photovoltaic cells (PVC) remains very doubtful, although they stand as a primary candidate. This is why the construction of gravity-assisted air accelerators looks like a potential solution, especially once raw materials become available on the Martian surface itself. The thermal efficiency of the accelerator for producing a high-power draught and for the propulsion of a cold-air turbine remains very high and attractive. The large area of the solar reflector array is still one of the basic drawbacks of the system, which could only be managed by creating very lightweight solar mirrors that are nevertheless stiff enough to withstand the winds at the Martian surface.
Time Optimization of Unmanned Aerial Vehicles Using an Augmented Path: With the pandemic gripping the entire humanity and with uncertainty hovering like a black cloud over our future sustainability and growth, it became almost apparent that though development and advancement are at their peak, we are still not ready for the worst. New and better solutions need to be applied so that we will be capable of fighting these conditions. One such prospect is delivery, where everything has to be changed, and every parcel, which used to be passed from person to person and department to department, has to be handled contactlessly throughout, with as little error as possible. Thus the prospect of drone delivery gained importance, together with the optimization of the existing system to make it useful for the delivery of important items like medicines, vaccines, etc. These modular AI-guided drones are faster, more efficient, less expensive, and less power-consuming than conventional delivery. Introduction: In today's world, technology is getting increasingly more advanced and complex with every moment passing by. People are finding ways to utilize energy more efficiently to obtain maximum output from a machine, i.e., increasing efficiency to obtain the most use out of technology. Moreover, new technologies are being developed to boost advancement, reduce human effort, minimize error, and bring about change for the betterment of society and mankind. About 100 years back, when two brothers left the ground and took the first step towards the sky, mankind did not look back; rather, it has been striving further away from land towards the sky and the universe. Since then, there have been multiple advancements and now the sky is within our grasp. Still, it is difficult for many in the world to access these wonders due to their large size, cost, etc. Now, these problems have also been addressed, as the devices are cheaper, can fit in the palm of our hand, and are available everywhere. Moreover, like every other technology, for the next step they are now being used in daily aspects of our life. These small, highly efficient systems are unmanned aerial vehicles (UAVs). This paper is about increasing their efficiency so that they can perform better in that aspect of life. For example, there has recently been a need for advancement in the delivery system, as the world is going through the COVID-19 pandemic, which is transmitted through human contact. There was a sudden need for contactless delivery, so drone trials were conducted for these deliveries, and these are now being improved to completely take over human delivery. There is, however, a major shortcoming in this area of research. While it is being advanced to follow the instructions needed for a successful delivery, it is still running on a GPS and land tracking system. These systems are built around giving the minimum distance between two points, which is appropriate in the case of travelling by land, as all the constraints like mileage, performance, efficiency, etc. are in terms of distance and speed. For instance, a journey by land is generally described through two points or by distance, like Mumbai to Pune, but in aviation the journey is described by time, like a 2-hour flight, so clearly time is of greater priority. UAVs, on the other hand, are aerial vehicles where time is the measure of efficiency and output. A drone's efficiency is calculated by how much flight time it provides.
Thus, GPS (used for coordinates or tracking) cannot by itself define the path; the more efficient path is the one using less time. Moreover, with the drone's technology, distance and displacement mean the same thing. The upthrust and the forward speed work in sync to move the drone, while the main challenge is to follow a certain path. The primary challenges discussed in this paper are as follows:
• First, the drone has to follow the path. For this, the path should be defined and there should also be a system that controls the motors and the body of the UAV so that the path can be followed;
• Secondly, the height limits. There are certain maximum heights for UAVs to fly at, which are decided by the legal bodies. This limits the area, the path choices, and the efficiency, as there has to be a constant-height segment where the altitude does not increase although the UAV still moves forward;
• The UAVs are automatic. The machine should be capable of deciding when to start, ascend or descend, or just move straight;
• As there are no guides or drivers, the machine should be self-driven. The air is full of traffic, both natural and man-made, with buildings also soaring into the sky. The UAVs must be equipped with an AI or guidance system to overcome barriers, take precautions, and control damage, together with specific conditions for when to activate which system. This must be overseen by an AI, which is complex to develop;
• As the UAV is automatic and its use will grow, there must be a tracking system that continuously transmits to and stores on servers all the changes during the journey until the journey is completed, so that in an emergency the UAV can be located easily.
In this paper, we design an algorithm that not only defines the path of the drone to reach its destination, but also makes the path automated, i.e., it does not require piloting; rather, the drone is self-driven, and this algorithm helps to increase the efficiency and provide higher output, so it can carry out a more complex task at a greater speed. Literature Survey: The delivery of products using a drone is not very popular in a country with huge diversity like India. We are working on an efficient algorithm to improve the time efficiency of the drone so that it can deliver the product while saving time [1]. Research has investigated the optimization of modular drones so that they can form an efficient delivery system; specifically, the battery life and motor have been investigated to save time and prevent delays in the delivery of products, and whether the carrier placed in the drone is strong enough to handle the weight of the parcel has also been considered [2]. A study has also been performed considering dense traffic; the authors developed the shortest path to deliver the product to the customer, and also stated that recharging systems have to be installed at different locations to recharge the drone when needed so that there will be no delay due to a power deficiency [3]. Cost-efficient delivery has also been explored by designing an algorithm to solve the vehicle routing problem, which is to deliver to as many customers as possible in a particular set, which in turn saves the cost of drones, fuel, etc. [4]. The efficient delivery of parcels has also been researched by considering the vehicle routing problem, which is the major problem, solved by using some mathematical equations and graphs so that it is easy to deliver within a particular set.
They have thought of equipping drones on a particular vehicle; the vehicle stands in the middle of all the locations and launches the drones so that all the parcels are delivered within a particular time, which saves a lot of time and is a cost-efficient idea [5]. Research has been performed on battery-efficient drone delivery, which makes the drones work more efficiently and provides more working hours for the delivery of parcels. The authors decided to insert two fully charged batteries in the drone; if one of the batteries is discharged, the other battery takes its place, i.e., the two batteries are swapped, thus making the drone more efficient [6][7][8]. An algorithm has been developed for energy and range minimization, which consequently helps in cost minimization and makes delivery very easy and efficient. The authors considered a vehicle that drives around the delivery area and finds the shortest path to every nearby location by using some mathematical graphs and algorithms; from each location, the vehicle launches a drone, which delivers to all the nearby places and returns to the vehicle before the battery is discharged. This idea makes delivery of all the nearby parcels very easy. All these studies mostly worked on the shortest path, the most efficient battery, and the fastest delivery, but no research has worked on efficient delivery based on the wind flow and environment to save time for the delivery [9][10][11][12][13]. Drone delivery is air transport, and in air transport the basic constraint for the measurement of efficiency is flight time. All air transport is calculated in terms of flight time, so the basic shortest-distance idea, and thus the usual tracking system, is of little use, as the shortest distance might not give the shortest time. This research paper is about this aspect [14][15][16][17], mainly how to reduce time by using basic physics and algorithms to follow that path so that less time is used for reaching the destination, even considering external interferences like wind speed, height, thrust, and the weight of the carriage [18][19][20]. One work proposes fuzzy control using a combination of two algorithms, Virtual Reference Feedback Tuning (VRFT) and Active Disturbance Rejection Control (ADRC), to obtain the benefits of both data-driven control and fuzzy control; this saves time in finding the optimal parameters of the controllers [21]. Another work addresses the control problem for stochastic nonlinear systems, also based on fuzzy logic systems, considering unmeasured states and unknown backlash-like hysteresis in order to reduce the communication load over the system [22]. Others have studied the concept of digital twins for an internal transport system, which is becoming more popular nowadays; it supports technologies like material flow and increases the efficiency of the production system [23]. A cyber-attack risk analysis that integrates Kaplan's and Garrick's approach with fuzzy theory has also been proposed, focusing on some of the main targets (databases, internal networks, machinery) to prevent cyber attacks; this approach can be used for any type of mining and the obtained result reveals the current cybersecurity status [24]. Problem Statement: In air transport, the basic constraint for the measurement of efficiency is flight time.
All air transport is calculated in terms of flight time, so the basic GPS-based shortest-distance routing, and thus the usual tracking system, is of little use, as the shortest distance might not give the shortest time. This research paper concerns this aspect [25][26][27][28]. The UAVs need a new set of instructions as they are self-driven, and the algorithm has to feed the coordinates in real-time to the UAVs to keep the drone on track and keep it from deviating from its path. Additionally, it has to check the coordinates of the destination and the mid-point of the journey to change its modes. Moreover, the natural terrain and the legal boundaries have to be kept in mind. There is also a set of precautionary or emergency instructions in case of any failure or damage. The UAVs should also send this information to a server, or store it, to keep track and recover in case of system failure. We investigate how to reduce time by using basic physics and algorithms to follow that path so that less time is used for reaching the destination, even considering external interferences like wind speed, height, thrust, and the weight of the carriage. Proposed Augmented Path-Time Architecture: In this paper, we found that there is a major flaw in UAV technology. Although the software is efficient, it is based on distance optimization techniques, on which all other land transportation is based. Rather, UAVs and other aerial vehicles have time as the major factor for calculating their efficiency and output. We used some well-established results of physics and applied them to aerial transportation to decrease the time and increase the output. Modern flight-controller chipsets, like the Mamba F7 and above, have started to add features and allow the flight controller to be adapted to the needs of the users. Not only do they have new features like Wi-Fi, Bluetooth, and a GPS tracking device, but they also provide a whole set of different empty sockets, and even storage and chips, to modify, integrate, and even change the functioning of the flight controller to our needs. Although this is still in development, most modern flight controllers have this functionality, making them expensive and complex. Blackbox: It contains 16 MB of flash memory and thus handles the processing and temporary storage of the algorithm. All the cache and real-time applications run in this part. It is the brain of the quad that carries out all the functionality of the algorithm and gives out the coordinates in real-time. It controls the quad and calls for the values needed to generate the required results to carry out the task. Gyrometer: It is the main component of the flight controller, as it calculates the roll, yaw, and pitch, keeps the balance of the UAV, and keeps it steady. Additionally, it has some onboard memory for storing and modifying purposes. The gyrometer keeps checking the angle of inclination of the drone, as well as the pitch and roll, to prevent any failure and make sure the drone completes its task safely. It also feeds the angle to the Blackbox to track the path during the descent and ascent of the journey. Curve of Ascent: Although there can be many curves for the ascent, the most time-efficient is the inclined plane. The inclined plane at an angle of 45 degrees takes the least time to reach a point at a height of x meters, and thus for the ascent an inclined path to a certain height is calculated to minimize time.
The inclined plane is the most efficient of all curves for the ascent, where all the forces of nature are bound to act against the drone. Such a path is easy to maneuver and maintain in this circumstance, as the normal forces either cancel each other completely or in part along this curve, while all three coordinates depend on each other to give more accurate coordinates for the drone to move along. The equation for the inclined plane (i.e., with an angle of inclination of 45 degrees, for a perfect inclined plane) is y = x, with the z-coordinate treated in the same way, as discussed below. Curve of Descent: The brachistochrone curve, or curve of fastest descent, is used to reach the destination point. It is the most efficient curve for the descent, as it increases the efficiency and decreases the time of flight by a great margin. The equation to be used in the algorithm is the cycloid x = a(θ − sin θ), y = a(1 − cos θ), where a is the distance from the centre to an endpoint and θ is the angle between the path and the heading of the UAV. The brachistochrone curve gives the fastest descent, with all the forces acting in favour of the drone; for this curve, only a small amount of adjustment and force from the motors is required, as all the rest is provided by gravity and wind. The curve is left free to move on the z-axis: since the curve has specific angles, to follow it accurately the drone has to have free movement along one axis to overcome any threat or damage in its way. The steep initial step gives a higher altitude gradient than any other curve, which makes the drone more prone to damage from things like buildings, animals, and stones, so an axis that is free to move proves to be an optimum solution. Euclidean Distance: For the aerial transport to follow the path of ascent and descent accurately, the Euclidean distance is used. This function is used to calculate the distance between two different points. The two points (initial and final) denote the centre of the path detected. The two points are given coordinates assigned to them by the GPS module, and then the Euclidean distance is calculated by a tracking algorithm that tracks the movement of the UAV, calculates the coordinates, guides the vehicle onto the path accurately, and keeps checking whether it is on track while simultaneously using the GPS module. All these calculations are done simultaneously, with the data being analyzed and used in real-time. These algorithms and calculations are carried out using different modules and chipsets on the flight controller of the UAV. Figure 1 shows the final path followed by the UAV as calculated by the algorithm. In this path, we considered only the x and y-axis, in contrast to the 3D world, to keep it simple, but the z-axis is taken as being equal to x and y for the ascent, as the drone has to keep a constant angle with the path (45 degrees). The Euclidean distance interprets all the data as coordinates, identifying every object by its coordinates; with its help, the drone not only has a decreased processing time but, by reading the coordinates of an incoming danger or threat, can also identify the threat and work out the protocol to avoid it, thus helping in the task at hand and minimizing the chances of failure. For the descent, the brachistochrone curve mainly concerns the x and y-axis, so the z-axis can be kept constant. GPS (Global Positioning System): The GPS is used for locating the initial and final destinations of the journey and also for tracking the UAVs.
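To make the geometry concrete, the short Python sketch below generates waypoints for the two curves named above and computes the Euclidean distance and midpoint used by the algorithm. This is my own illustration, not the paper's code; the step counts and the 120 m / 1000 m example values are assumptions.

```python
import math

def euclidean(p, q):
    """Straight-line distance between two points (used for the midpoint check)."""
    return math.sqrt(sum((a - b) ** 2 for a, b in zip(p, q)))

def ascent_waypoints(height, steps=20):
    """45-degree inclined ascent, y = x: horizontal advance equals altitude gain."""
    return [(height * i / steps, height * i / steps) for i in range(steps + 1)]

def descent_waypoints(a, theta_end, steps=40):
    """Brachistochrone (cycloid) x = a*(t - sin t), y = a*(1 - cos t);
    y is taken here as the altitude lost from the start of the descent."""
    pts = []
    for i in range(steps + 1):
        t = theta_end * i / steps
        pts.append((a * (t - math.sin(t)), a * (1.0 - math.cos(t))))
    return pts

# Example: climb to 120 m, then later descend along a cycloid with a = 60 m
# swept up to theta = pi (which loses 2*a = 120 m of altitude).
climb = ascent_waypoints(120.0)
drop = descent_waypoints(60.0, math.pi)
start, destination = (0.0, 0.0), (1000.0, 0.0)
midpoint = tuple((s + d) / 2 for s, d in zip(start, destination))
print(len(climb), len(drop), euclidean(start, destination), midpoint)
```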
The drone has to be tracked through a remote computer and all the data is stored and manipulated by an AI; however, the tracking can easily be done through the GPS while also monitoring the progress and the path of the drone. Additionally, in case of failure, the GPS can be used to locate the drone for its retrieval. Figure 2 represents the architecture of the proposed augmented path-time model. Augmented Path Mapping Algorithm: Algorithm 1 is the first initializer of the program; it sets up the system and is the trigger after which all the later algorithms follow. The Euclidean function is initialized. This function is used for calculating the set of values that defines the coordinates the drone has to follow to keep on the path. It also calculates the midpoint of the path; this midpoint is used for finding the point of descent, at which the drone has to decrease its altitude so as to reach the ground at precisely the exact location. Algorithm 2 is the ascent function, used to define the path the quad has to follow while starting its ascent. The path is defined on all the axes, i.e., the x, y, and z-axis. There is also a mid variable that keeps checking the coordinates; at precisely the midpoint the ascent function ends, or alternatively it ends when the regulated height, read through the barometer, is reached, so that the drone flies below the legally allowed height for UAVs. There are thus two conditions, depending on whether the midpoint or the height limit is reached first, and whichever comes first triggers either the third or the fourth function, i.e., descent or the straight path. If the mid-point is reached first, then Algorithm 3, the descent function, is initialized. The descent function is used to track the descent path using its respective equation. It has an end variable, which compares the final coordinates with the live coordinates so that progress is checked. All this is calculated in real-time and modified simultaneously during the flight of the quad (see Algorithm 3). If the height limit is reached first, Algorithm 4 is initialized. The straight function stops the drone from any further change along the y-axis, as the optimum height for the UAV has been reached; since it cannot go higher, it moves in a straight line until the midpoint of the whole path is reached. All the coordinates besides the mid-point are checked in real-time and values are called at every instant from the sensors in place; the values are then recalculated for the path.
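The orchestration of Algorithms 1-4 can be pictured as a small phase planner. The sketch below is my own interpretation of the description above, not the paper's listing; the phase names and the 3 km / 120 m example values are assumptions.

```python
def plan_phases(total_distance, height_limit):
    """Phase boundaries following the described logic: climb at 45 degrees until
    either the path midpoint or the legal ceiling is hit, cruise straight if the
    ceiling came first, then descend to the target. Values are horizontal
    distances from the start, in the same units as the inputs."""
    midpoint = total_distance / 2.0
    # On a 45-degree climb the horizontal advance equals the altitude gained,
    # so the ceiling is reached after `height_limit` of horizontal travel.
    ascent_end = min(midpoint, height_limit)
    phases = [("ascent (Algorithm 2)", 0.0, ascent_end)]
    if ascent_end < midpoint:
        phases.append(("straight (Algorithm 4)", ascent_end, midpoint))
    phases.append(("descent (Algorithm 3)", midpoint, total_distance))
    return phases

# Example: a 3 km delivery leg with a 120 m altitude ceiling.
for name, a, b in plan_phases(3000.0, 120.0):
    print(f"{name}: {a:.0f} m -> {b:.0f} m")
```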
This system is highly efficient, as any changes or complications during the flight can be overcome by precautionary measures, after which the path is followed again (see Algorithm 4). Calling Technique of the Algorithm: When the algorithm runs, it calls for values from the Blackbox via function calls; the Blackbox already holds the simultaneous values of the respective modules, i.e., gyro, barometer, etc., which are used in the flight controller for balancing and motor control of the quad. The Blackbox then provides the values to the respective functions in the algorithm, which give the next set of coordinates for the drone to move to, and this process goes on, thus producing the final path that the drone has to follow to the final destination. The data is passed through the program at every single instant of time, as the coordinates are generated throughout the journey for the progress and smooth running of the drone. All the data is processed for the next coordinates, and each coordinate is checked to keep track of the ascent and descent. Figure 3 is a small representation of the working of the algorithm from input to output. Experimental Setup: In Tables 1-3, below, we calculated the time of the original path and of the modified path. Our research shows that if the drone can follow the augmented path, the time, and thus the output, of these UAVs can be improved for better usage of resources. The efficiency is calculated by Equation (4). Table 1 and Figure 4 show the difference between the initial ascent time and the improved time (theoretically). The efficiency is constant in all the cases due to the fact that the conditions and assumptions are kept constant throughout. In reality, the efficiency may differ somewhat due to some exterior constraints like wind speed, turbulence, motor speed, etc., but the graph will always show an increasing curve. These results show that there will be an increase in efficiency if the drone moves on an inclined path with constant speed, thus decreasing the time and finally the power usage. Table 2 (time of descent) and Figure 5 prove that the descent time can be decreased drastically. Moreover, regarding the initial and augmented paths, both are taken to be in free fall, i.e., no power usage. To keep the drone on the specified path, some forces are needed from the motors, but this contribution cancels, as it is also present in the initial case. To prevent the drone from losing control during free fall and to guarantee safety, some thrust is maintained throughout. Thus, the energy terms cancel in both cases and the time can be decreased by the augmented brachistochrone curve, giving more output for the same energy input and increasing efficiency. All the above graphs are made with respect to time. As can be seen from Figures 5-7, there is an increase in efficiency in both the ascent and the descent.
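Equation (4) itself is not reproduced in the extracted text, so the sketch below assumes the usual relative-improvement definition, efficiency = (t_original − t_augmented)/t_original. It compares the ascent along the rectangle edges (vertical climb, then horizontal leg, at the same constant speed) with the 45-degree incline, and the descent along a straight slide with the brachistochrone under gravity alone, in line with the free-fall assumption stated above. All numerical values are illustrative only.

```python
import math

G = 9.81  # m/s^2

def ascent_times(height, horiz, speed):
    """Original path: vertical climb then horizontal leg; augmented path:
    straight incline to the same point, both flown at the same constant speed."""
    return (height + horiz) / speed, math.hypot(height, horiz) / speed

def straight_descent_time(horiz, drop):
    """Slide from rest, under gravity only, down the straight line joining the points."""
    length = math.hypot(horiz, drop)
    return math.sqrt(2.0 * length ** 2 / (G * drop))

def brachistochrone_time(horiz, drop):
    """Descent from rest along the cycloid x = a(t - sin t), y = a(1 - cos t)."""
    ratio = drop / horiz
    lo, hi = 1e-6, 2.0 * math.pi - 1e-6
    for _ in range(200):                      # bisection on the cycloid end angle
        mid = 0.5 * (lo + hi)
        if (1.0 - math.cos(mid)) / (mid - math.sin(mid)) > ratio:
            lo = mid
        else:
            hi = mid
    theta = 0.5 * (lo + hi)
    a = drop / (1.0 - math.cos(theta))
    return theta * math.sqrt(a / G)

def efficiency(t_orig, t_new):
    """Assumed form of Equation (4): relative time saving."""
    return (t_orig - t_new) / t_orig

# Ascent: climb 100 m while advancing 100 m, at 5 m/s (45-degree incline case).
t0, t1 = ascent_times(100.0, 100.0, 5.0)
print(f"ascent : {t0:.1f} s -> {t1:.1f} s, gain {100 * efficiency(t0, t1):.1f}%")

# Descent: lose 100 m of altitude over 100 m of ground, gravity only.
t0, t1 = straight_descent_time(100.0, 100.0), brachistochrone_time(100.0, 100.0)
print(f"descent: {t0:.2f} s -> {t1:.2f} s, gain {100 * efficiency(t0, t1):.1f}%")
# With equal height and advance, the ascent gain is 1 - sqrt(2)/2, about 29 %,
# constant across cases, which matches the constant efficiency noted for Table 1.
```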
Additionally, the values in Tables 1-3 show that if we apply the above system and include these changes in the present system, the time can be decreased by a significant factor. The graphs show that the traditional approach fails when the medium changes, i.e., when moving from land to air a new system is required, and the new system provides a much better result. The graphs also indicate that if the method is applied over the whole journey, the UAVs have a much higher chance of completing the task than with the traditional approaches, as it reduces one of the most limiting factors, i.e., the power factor. With power being used more efficiently, the drone can cover a greater distance using the same amount of power and thus increase its area of reach and the number of tasks performed at any particular time. Thus, if the above ascent and descent algorithms are used, we obtain faster drones, which helps in two ways. One is making them more efficient: the ascent algorithm follows an inclined plane, which not only reduces the time but also shortens the path travelled while gaining both distance and height simultaneously. The other comes from the descent algorithm, a brachistochrone curve, one of whose major advantages is that it can be followed with minimal adjustments, even in a power-saving mode, since it basically uses gravity as its driving force; thus the cumulative efficiency of this path provides better overall performance and higher energy output. Efficiency of Descent: For comparison, the path is taken to be a rectangle (2D) or a cuboid (3D), where the drone first follows a straight-line path to reach the maximum height and then flies on to reach the destination, as shown in Figure 7. This means that first there is a change in the y-axis, or height, only, and then, after reaching the maximum height, the drone moves forward. The assumptions are as follows:
• Speed is constant, due to the presence of external thrust from the motors;
• The original path is a rectangle (2D) or cuboid (3D) and the final path is the corresponding diagonal;
• Although the path is changed and recalculated at each instant of time, for regularity of the results it is kept constant and the values are taken with that in mind;
• In descent, the UAV is in a free-fall condition and the time is calculated for it;
• No external force is taken into account apart from the forces keeping the drone on the path.
Conclusions and Future Work: In this paper, a way to increase the efficiency of the UAVs we see today has been presented with the help of a new algorithm. The basic measure for these devices is based on distance-efficient systems to shorten their journey; however, as they operate in the air, physical boundaries like roads, plots, etc. disappear. Thus, the advantage of this algorithm is that it improves efficiency by reducing the time and completing tasks faster. This helps in using battery power for a longer period. It brings more output from these UAVs, thus making them more efficient and increasing their flight duration drastically. A disadvantage of this algorithm is that, although it works efficiently, the axes are fixed, i.e., two axes are fixed for movement while the third is reserved for barrier dodging, and this fixation might be difficult to implement in the AI, as some barriers cannot be avoided using just one axis. In this paper, we did not include the intelligent internal transport system, which may be addressed later on [23].
Additionally, cyber security should be implemented in our system so that there is no possibility of cyber-attacks, which may lead to many problems [24]. For future work, a more efficient system can be obtained by using it in real-time applications, like actual UAVs or quads, to get real results and find more ways to improve. The system could be provided with a third ultrasound or infrared scanning device with an AI, which can easily detect incoming deviations or barriers that could prove fatal for the drone, and avoid them using the same algorithm on the third axis while staying on the path. Another major factor is wind speed, since the drone has both horizontal and vertical speeds; wind speed can be detected and used as an upward thrust or a downward pull, as is done by bigger aerial vehicles like aeroplanes, jets, etc. There is also a payload limit, so if this algorithm is used, it can help to increase the payload. The real-time application of this idea and algorithm in the real world, and also implementing the surrounding environment and wind speeds in the algorithm, is the next step for the improvement of the UAVs we see today. Additionally, knowledge of the topological structure of the different places where the drone has to work can help to overcome some natural boundaries. The topological map of a place can provide deep insight into its structure and landmarks and help to avoid obstacles like tall grasses or higher-altitude landforms like mountains, thus making the drone moderate its flight accordingly and efficiently. Although our algorithm gives a perfect ascent and descent time, there might be circumstances, or long journeys, where these two cannot be followed immediately; a straight path is then the best option, as the quad will not be able to go any higher and the final destination is still far away, so the descent sequence cannot yet be initialized. Thus, a way to identify and initiate the specific sequence according to the requirement can be developed that can cover the whole journey irrespective of its length or flight time. Some precautionary protocols and cyber security should be implemented to prevent harmful and fatal attacks on the system, so that it can be used in the real world. Funding: This research received no external funding. Data Availability Statement: The data reported in this study can be found in the sources cited in the reference list.
Inpainting with Separable Mask Update Convolution Network Image inpainting is an active area of research in image processing that focuses on reconstructing damaged or missing parts of an image. The advent of deep learning has greatly advanced the field of image restoration in recent years. While there are many existing methods that can produce high-quality restoration results, they often struggle when dealing with images that have large missing areas, resulting in blurry and artifact-filled outcomes. This is primarily because of the presence of invalid information in the inpainting region, which interferes with the inpainting process. To tackle this challenge, the paper proposes a novel approach called separable mask update convolution. This technique automatically learns and updates the mask, which represents the missing area, to better control the influence of invalid information within the mask area on the restoration results. Furthermore, this convolution method reduces the number of network parameters and the size of the model. The paper also introduces a regional normalization technique that collaborates with separable mask update convolution layers for improved feature extraction, thereby enhancing the quality of the restored image. Experimental results demonstrate that the proposed method performs well in restoring images with large missing areas and outperforms state-of-the-art image inpainting methods significantly in terms of image quality. Introduction Image inpainting restores missing or damaged portions of an image, playing a crucial role in image coding and computational imaging. It fills in missing areas with plausible content, improving coding efficiency and fidelity. In computational imaging, it helps overcome challenges like occlusions and incomplete data, generating accurate scene representations. Overall, image inpainting facilitates efficient representation, transmission, and analysis of visual data, offering promising solutions for practical applications. This technique has been widely applied in various fields, including medicine, military, and video processing, among others [1][2][3][4][5][6]. The early image inpainting methods were mainly based on traditional image processing techniques, such as texture synthesis, patchbased methods, and exemplar-based methods. Texture synthesis methods [7] typically fill in missing areas by replicating or generating textures from the surrounding image regions. Patch-based methods [8] use similar patches from the non-missing areas to fill in the missing regions. Exemplar-based methods [9], on the other hand, utilize a set of exemplar images to complete the missing regions by finding the most similar patches or structures from the exemplars. However, these early methods often suffer from a limited capability to handle complex structures and to generate realistic textures, resulting in visible artifacts and inconsistencies in the inpainted regions. Recently, deep-learning-based approaches have emerged as the state-of-the-art for image inpainting, due to their ability to learn complex relationships and structures in the image data. These methods [10][11][12] typically involve using encoder-decoder networks to learn the context of the surrounding pixels, and then use this information to infer the missing content. Pathak et al. [13] were the first to apply convolutional neural networks to image restoration, and they designed a context encoder to capture the background information of images. Yang et al. 
[14] designed a dual-branch generator network, where one branch focuses on restoring the texture information of the image, while the other focuses on restoring the structural information, and then the results of the two branches are fused to improve the quality of the image restoration. Subsequently, due to the outstanding performance of generative adversarial networks in image restoration, many deep neural network architectures began to adopt adversarial learning strategies for image inpainting. For example, Yeh et al. [15] proposed an adversarial learning network consisting of a generator and discriminator, which can automatically generate high-quality restored images. The generator aims to generate complete images from the missing parts, while the discriminator is used to evaluate whether the generated results are similar to natural images. Iizuka et al. [16] proposed the concepts of the global discriminator and local discriminator. The global discriminator is used to detect the consistency of the overall image, while the local discriminator is used to detect the details and texture of local regions. The texture consistency of the restored results is ensured by evaluating the entire image and local regions. With the rapid development of deep learning, new technologies are constantly emerging in the field of image inpainting. For example, contextual attention mechanisms [17] can capture contextual information of different scales in the image, thereby improving the accuracy of image restoration. Partial convolution [18] only convolves known regions and ignores other missing parts, which can better handle missing areas. Gate convolution [19] can adaptively weight information from different positions to improve image restoration quality. Region normalization [20] can enhance the model's generalization ability, thus making the image restoration results more accurate and robust. However, it is challenging to model both the texture and structure of an image using a single shared framework. To effectively restore the structure and texture information of images, researchers, such as Guo et al. [21], have proposed a novel dual-stream network called CTSDG. This approach decomposes the image inpainting task into two subtasks, namely texture synthesis and structure reconstruction, further improving the performance of image restoration. This strategy allows for better handling of different feature requirements, resulting in enhanced quality and accuracy of the restored images. Furthermore, existing image inpainting techniques typically provide only a single restoration result. However, image inpainting is inherently an uncertain task, and its output should not be limited. To address this issue, Liu et al. [22] introduced a new approach based on the PD-GAN algorithm. They considered that the closer the hole is to the center, the higher its diversity and strength. By leveraging this idea, they achieved satisfactory restoration results. This method introduces more diversity and realism in the restoration outcomes, enabling better adaptation to different inpainting requirements. To address the restoration of boundary and high-texture regions, Wu et al. [23] proposed an end-to-end generative model method. They first used a local binary pattern (LBP) learning network based on the U-Net architecture to predict the structural information of the missing areas. Additionally, an upgraded spatial attention mechanism was introduced as a guide and incorporated into the image inpainting network. 
By applying these techniques, the algorithm aims to better restore the missing pixels in boundary and high-texture regions. The aforementioned deep learning-based inpainting methods rely on the encoderdecoder to infer the context of small missing image areas. They then infer the texture details of the missing area based on the image features of the non-missing area and use local pixel correlation to restore the damaged image area. However, when the missing area of the image becomes larger and the distance between unknown and known pixels increases, these methods can produce semantic ambiguity due to the weakening of pixel correlation. Additionally, due to the limitations of convolution kernel size and a single convolution layer, the range of extracted information is too small to capture global structural information from distant pixels. As a result, it is challenging to repair larger missing areas with more semantics directly in one step. Mou et al. [24] proposed a novel model called a deep generalized unfolded network (DGU-Net). This model integrates gradient estimation strategies into the steps of the gradient descent algorithm (PGD) to enhance the performance of the restoration process. However, it was not successful in handling large-area missing images. This indicates that there are indeed difficulties in effectively restoring images with extensive missing regions. Inspired by the human learning process, by first learning some simple tasks and then gradually increasing the difficulty of the task, this learning strategy, from easy to difficult, can gradually learn a better performance model. The pixels inside the region are easier to repair. Therefore, Zhang et al. [25] proposed another progressive repair method, which progresses from the border of the missing region to the center. However, the progressive repair method must update the feature map in each iteration mapping back to the RGB space, resulting in a high computational cost. In response to this problem, Li et al. [26] designed the RFR-Net model to perform progressive restoration at the image feature level. That is, the input and output of the model need to be in the same space representation, which greatly saves computational costs. However, the RFR-Net model only uses the learnable convolution kernel to perceive the edge of the damaged area, ignoring the context information outside the receptive field. There are still some problems with blurred boundaries and incorrect semantic content that lead to repair results. Aiming at the problem of huge amount of network parameters in image inpainting, we naturally think of optimizing the network structure and reducing unnecessary network layers. This paper uses the simplified encoder-decoder as the backbone of the generator. The end-to-end one-stage network dramatically reduces the complexity of the network compared to the progressive inpainting and multi-stage networks. Nevertheless, the cost of doing this is that the network may lose some ability to capture fine-grained texture details and global structural information, especially in large missing regions. In order to improve the restoration effect, this paper proposes a separable mask update convolution to reduce the interference caused by the missing regions in the image during the restoration process. This paper presents three main contributions in the field of image inpainting: • Lightweight end-to-end inpainting network: The paper introduces a novel lightweight end-to-end inpainting generative adversarial network. 
This network architecture, consisting of an encoder, decoder, and discriminator, addresses the complexity issue present in existing inpainting methods. It enables fast and efficient image restoration while maintaining high-quality inpainting results. The streamlined network design ensures computational efficiency and practicality; • Separable mask update convolution: The paper proposes a unique method called separable mask update convolution. By improving the specific gating mechanism, it enables automatic learning and updating of the mask distribution. This technique effectively filters out the impact of invalid information during the restoration process, leading to improved image restoration quality. Additionally, the adoption of deep separable convolution reduces the number of required parameters, significantly reducing model complexity and computational resource demands. As a result, the inpainting process becomes more efficient and feasible; • Superior inpainting performance: Experimental results demonstrate that the proposed inpainting network surpasses existing image inpainting methods in terms of both network parameters and inpainting quality. The innovative network architecture, coupled with the separable mask update convolution, achieves superior inpainting results with fewer parameters, reducing model complexity while maintaining highquality restorations. Attention Mechanism The attention mechanism can help the image inpainting model to find the most similar feature block from the non-missing area of the image according to the characteristics of the missing area, thereby improving the quality of image inpainting. Yu et al. [17] added an attention mechanism to the image inpainting network. The extracted feature information is divided into foreground and background areas, and the image feature blocks are matched in a long distance according to the similarity of the foreground and background. However, this image inpainting method ignores the correlation between the internal features of the missing area of the image. Therefore, Liu et al. [27] proposed a coherent semantic attention mechanism, which effectively improves the semantic consistency of the internal features in the missing area of the image. Since the features extracted by deep and shallow layers are not the same in convolutional neural networks, Zeng et al. [28] proposed a pyramidal context encoder network. The attention transfer network can transfer the attention information obtained from the high-level semantic features to the low-level features. This model is a restoration method that acts on the feature layer, which can improve the semantic consistency of the image after restoration. Literature [26] proposes a recurrent feature reasoning network, which works on the image feature level. In the process of feature reasoning, the designed knowledge consistent attention (KCA) module is added. The attention score determines the attention score of this module in the loop process, and the current attention score is jointly determined. This method can significantly save computational costs and achieve a more refined repair result. However, the features located in the missing area usually have a significant deviation, leading to the attention module's wrong attention allocation. Finally, the model fills in incorrect texture details for some missing areas. Phutke et al. [29] applied wavelet query multi-head attention to image inpainting. 
Wavelet query multi-head attention is an attention mechanism that combines wavelet transforms with multi-head attention. This allows the model to attend to information from different representation subspaces at different positions, improving its ability to capture long-range dependencies and complex relationships between the input and output sequences. Convolution Method Convolution is a fundamental mathematical operation in deep neural networks to extract essential features from input signals or images. It has revolutionized computer vision and is widely used in various deep learning tasks, including image classification, object detection, segmentation, and image inpainting. Researchers commonly used valid convolution for feature extraction in the early stages of applying deep learning to image inpainting. During this period, they mainly focused on studying the restoration of regular square-shaped missing regions in the center of the image, as in the work of Pathak et al. [13] and Yu et al. [17]. Since the missing regions were regular, their impact on the restoration results was relatively low during the convolution kernel sliding process. However, what needs to be restored is often irregular regions. In this case, feature extraction using valid convolutions suffers interference from missing regions. Because the convolution kernel will cover many mixed windows of effective areas and invalid areas during the sliding process, this can lead to inaccurate learned features and thus affect image restoration results. So, researchers began exploring using more advanced convolutional for image inpainting. The concept of partial convolution was first proposed by Liu et al. [18]. Partial convolution uses only valid pixels in the kernel to compute the output, ignoring invalid pixels (such as those in missing regions). This allows the convolution operation to focus on valid pixels, preventing missing regions from affecting the learned features. Partial convolution also has some limitations. One of the main limitations is that partial convolution is computationally expensive compared to regular convolution because it requires additional calculations to generate the mask. Additionally, partial convolution may not be suitable for cases where the missing regions occupy a large portion of the image because the valid pixels may not provide sufficient contextual information for restoration. Subsequently, in order to solve the problem that partial convolution cannot handle large areas of missing regions, gated convolution was proposed by Yu et al. [19]. Gated convolution is a variant of partial convolution that introduces an additional gating mechanism to control the flow of information through the convolutional kernel. The gating mechanism consists of a sigmoid function that generates a gating map to modulate the convolutional kernel's feature responses. The gating map is used to selectively pass through the valid pixels in the convolutional kernel and suppress the invalid pixels in the missing regions. Liu et al. [30] used part of the convolution kernel to process the structure and texture features of the image to generate feature information with different scales. Due to the excellent performance of gated convolution in repairing irregularly missing regions, Ma et al. [31] proposed an improved version called dense gate convolution. This method incorporates the idea of dense connections, which allows information to flow freely within the network, thereby enhancing feature propagation and utilization. 
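As a point of reference for the gating idea discussed above, the sketch below shows one common way to realize a gated convolution layer: a feature branch and a parallel gating branch whose sigmoid output rescales the features, letting the network down-weight invalid (hole) pixels. This is a minimal illustration of the mechanism described by Yu et al. [19], not the authors' implementation; the layer sizes and the toy input are assumptions.

```python
import torch
import torch.nn as nn

class GatedConv2d(nn.Module):
    """Gated convolution sketch: features are rescaled by a learned soft mask
    in (0, 1), so responses over invalid regions can be suppressed."""
    def __init__(self, in_ch, out_ch, kernel_size=3, stride=1, padding=1):
        super().__init__()
        self.feature = nn.Conv2d(in_ch, out_ch, kernel_size, stride, padding)
        self.gate = nn.Conv2d(in_ch, out_ch, kernel_size, stride, padding)

    def forward(self, x):
        # Feature branch passes through relu, gating branch through sigmoid.
        return torch.relu(self.feature(x)) * torch.sigmoid(self.gate(x))

# Example: a feature map with a zeroed-out "hole" region.
x = torch.randn(1, 3, 64, 64)
x[:, :, 16:48, 16:48] = 0.0
y = GatedConv2d(3, 32)(x)
print(y.shape)  # torch.Size([1, 32, 64, 64])
```

Because the gate is produced by a second full convolution, this construction roughly doubles the parameter count of a plain convolution, which is the overhead the separable mask update convolution proposed later aims to remove.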
Although the above convolution method solves the problem of large-area irregular mask image competition, there is still room for improvement in image restoration quality. Moreover, the problems of parameter expansion and increased calculation consumption caused by complex convolution methods have yet to be resolved but have intensified. Progressive Image Inpainting In view of the weak ability of convolutional neural networks in modeling long-distance pixel correlations between known long-distance regions (background regions) and regions to be inpainted (foreground regions), progressive image inpainting methods have been widely used in recent years. Xiong et al. [32] divided the whole inpainting task into three parts in sequence: perceiving the image foreground, completing object contours, and inpainting missing regions [33]. first predicted the structural information of the missing region of the image and then repaired the image according to the predicted structural information to improve the feature structure consistency between the repaired image and the real image. An excellent residual architecture in the full-resolution residual network proposed by Guo et al. [34] is helpful for feature integration and texture prediction. Furthermore, each residual block only reconstructs the specified missing regions to ensure image quality during the progressive inpainting process. Chen et al. [35] completed the image inpainting task step by step from the perspective of pyramid multi-resolution, during which low-resolution inpainting and high-resolution inpainting are performed in a cycle. Li et al. [36] stacked the visual structure reconstruction layer in the U-Net structure containing some convolutional layers. They reconstructed the structure and visual features of the missing area in a progressive manner. In this network, the updated structural information in each visual structure reconstruction layer is used to guide the filling of feature content to gradually reduce the missing area and finally complete the restoration task. Liao et al. [37] proposed a progressive image inpainting network that uses semantic segmentation information to constrain image content. However, these progressive image inpainting methods ignore the contextual information outside the receptive field of the convolution kernel. Shi et al. [38] proposed a multi-stage progressive inpainting method that divides the inpainting process into three stages: feature extraction, interactive inpainting, and reinforcement reconstruction. They used a dual-branch structure to focus on gradually restoring texture-level features. This approach avoids the redundant computation of previous cyclic progressive inpainting methods. Liu et al. [39] also used a dual-branch structure, but instead of having high-resolution and low-resolution branches, they focused on two progressive feature extraction branches for structure and texture feature extraction. This approach allows for the maximum restoration of the image's structure and texture information. GAN for Inpainting Generative adversarial networks (GAN) is a deep learning model consisting of a generator and a discriminator. It has been widely used in image inpainting. The generator takes an image with missing regions as input and generates a repaired image, while the discriminator tries to distinguish between the generated repaired image and the real image. Through adversarial training, the generator gradually learns to generate realistic repaired images that visually resemble the real images. 
The application of GAN in image inpainting has the advantage of generating natural and preserving structural and texture features in the repaired results. In recent years, researchers have proposed different GAN-based image inpainting methods. Guo et al. [21] proposed a novel two-stream network that models structure-constrained texture synthesis and texture-guided structure reconstruction in a coupled manner. The method, named "Conditional Texture and Structure Dual Generation (CTSDG)," incorporates a bi-directional gated feature fusion (Bi-GFF) module to exchange and combine structure and texture information, and a contextual feature aggregation (CFA) module to refine the generated contents using region affinity learning and multi-scale feature aggregation. Li et al. [40] introduced a novel multi-level interactive Siamese filtering (MISF) technique that combines image-level predictive filtering and semantic filtering on deep feature levels. Their method contains two branches: a kernel prediction branch (KPB) and a semantic and image filtering branch (SIFB). These branches interactively exchange information, with SIFB providing multi-level features for KPB and KPB predicting dynamic kernels for SIFB. This method leverages effective semantic and image-level filling for high-fidelity inpainting and enhances generalization across scenes. Chen et al. [3] proposed a feature fusion and two-step inpainting approach (FFTI). The method utilizes dynamic memory networks (DMN+) to fuse external and internal features of the incomplete image and generate an incomplete image optimization map. A generation countermeasure generative network with gradient penalty constraints is constructed to guide the rough repair of the optimized incomplete image and obtain a rough repair map. Finally, the coarse repair graph is optimized using the coherence of relevant features to obtain the final fine repair graph. Xia et al. [41] proposed an effective image inpainting method called a repair network and optimization network (RNON), which utilizes two mutually independent generative adversarial networks (GANs). The image repair network module focuses on repairing irregular missing areas using a generator based on a partial convolutional network, while the image optimization network module aims to solve local chromatic aberration using a generator based on deep residual networks. The synergy between these two network modules improves the visual effect and image quality of the inpainted images. Although these methods have made significant progress in image texture and structure restoration, they also have certain limitations. The multi-stage, multi-branch, and multi-network nature of these methods leads to increased computational resource consumption and longer computation time. CTSDG, while advantageous in coupled texture synthesis and structure reconstruction, may have limitations when dealing with large-scale corruptions or missing regions spanning important areas of the image. The computational complexity of the Bi-GFF and CFA modules may restrict real-time performance in certain applications. MISF, which focuses on semantic filtering rather than fine-grained texture reconstruction, may experience a sharp performance decline when dealing with large missing areas requiring detailed texture recovery. The effectiveness of FFTI may depend on the quality and completeness of the external and internal features used for fusion, making it susceptible to interference from irrelevant information. 
RNON, which utilizes two independent GANs, requires more computational resources and time for training and faces an increased risk of mode collapse. The method's repaired results also tend to exhibit over-smoothing. Approach Like a person completing a jigsaw puzzle alone, the image inpainting algorithm fills in the missing area by piecing together the surrounding pixels and keeping the contextual semantic information consistent with the image structure during the filling process. However, in the related work mentioned above, although progressive inpainting uses surrounding pixels to progressively restore missing pixels, they cannot maintain contextual image semantics and structural information well. At the same time, these algorithms have a large number of parameters and are computationally expensive. Therefore, this paper proposes a U-Net-like codec network with separable mask update convolution, significantly reducing network complexity. Network Structure The proposed SMUC-net in this paper is a deep learning-based image restoration model whose backbone is a codec serving as the generator. Including components such as a discriminator, encoder, and decoder constructs an end-to-end learning framework that allows the entire restoration process to be completed within a unified framework. The overall structure of the SMUC-net is shown in Figure 1. Specifically, the encoder adopts separable mask update convolution modules and region normalization modules, which can effectively extract image feature information and optimize computation efficiency. Next, the image feature vector undergoes processing through eight residual blocks, which can effectively increase the depth and flexibility of the model while avoiding the problem of gradient vanishing. Finally, the decoder consists of separable mask update convolution layers and region normalization modules and uses the tanh function to activate the output result, obtaining the restored image. In SMUC-net, the discriminator evaluates the similarity between the generated restored and original images, providing feedback mechanisms for model optimization. Among them, the loss function is the core of model optimization. This paper adopts multiple loss functions such as L1 loss, adversarial loss, perceptual loss, and style loss to train the model(the loss function is consistent with [20]). The L1 loss measures the pixel-level distance between the restored and original images, while the adversarial loss encourages the generator to produce more realistic restored images. The perceptual loss ensures the perceptual quality of the restored image by comparing the feature representations of the restored image and the original image. The style loss can make the restored image better conform to the style characteristics. In recent years, deep learning has dramatically advanced the field of image restoration due to its remarkable performance in various applications. However, the challenges remain since the image restoration problem is inherently complex and challenging. SMUC-net represents a significant step forward in addressing these challenges, providing a powerful tool for restoring damaged or missing image information. Moreover, combining various loss functions and using a unified end-to-end learning framework in SMUC-net further enhance its effectiveness and versatility. Future research can explore improving the efficiency and accuracy of image restoration models like SMUC-net, extending their applicability to even more complex scenarios. 
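To fix ideas before the individual modules are described, the following skeleton sketches a generator with the overall shape described above: a downsampling encoder, eight residual blocks, and an upsampling decoder ending in tanh. Ordinary convolution and instance normalization layers stand in for the separable mask update convolution and region normalization modules introduced in the next subsections, and the four-channel input (RGB image plus mask) and channel widths are assumptions, not the paper's exact configuration.

```python
import torch
import torch.nn as nn

class ResBlock(nn.Module):
    def __init__(self, ch):
        super().__init__()
        self.body = nn.Sequential(
            nn.Conv2d(ch, ch, 3, 1, 1), nn.ReLU(inplace=True),
            nn.Conv2d(ch, ch, 3, 1, 1))
    def forward(self, x):
        return x + self.body(x)

class GeneratorSkeleton(nn.Module):
    """Plain-convolution stand-in for the SMUC-net generator: encoder,
    eight residual blocks, decoder with a tanh output."""
    def __init__(self, ch=64):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Conv2d(4, ch, 7, 1, 3), nn.InstanceNorm2d(ch), nn.ReLU(True),
            nn.Conv2d(ch, 2 * ch, 4, 2, 1), nn.InstanceNorm2d(2 * ch), nn.ReLU(True),
            nn.Conv2d(2 * ch, 4 * ch, 4, 2, 1), nn.InstanceNorm2d(4 * ch), nn.ReLU(True))
        self.middle = nn.Sequential(*[ResBlock(4 * ch) for _ in range(8)])
        self.decoder = nn.Sequential(
            nn.ConvTranspose2d(4 * ch, 2 * ch, 4, 2, 1), nn.InstanceNorm2d(2 * ch), nn.ReLU(True),
            nn.ConvTranspose2d(2 * ch, ch, 4, 2, 1), nn.InstanceNorm2d(ch), nn.ReLU(True),
            nn.Conv2d(ch, 3, 7, 1, 3), nn.Tanh())

    def forward(self, masked_image_and_mask):
        return self.decoder(self.middle(self.encoder(masked_image_and_mask)))

out = GeneratorSkeleton()(torch.randn(1, 4, 256, 256))
print(out.shape)  # torch.Size([1, 3, 256, 256])
```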
Separable Mask Update Convolution Modules

For irregular image inpainting tasks, missing regions may have arbitrary shapes and sizes, which means that traditional vanilla convolution-based inpainting algorithms are often incompetent. The operation of vanilla convolution can be written as

(f * g)(x, y) = Σ_{i=−k}^{k} Σ_{j=−k}^{k} f(x + i, y + j) · g(i, j)

Among them, f is the input image, g is the filter, k is the radius of the filter, (x, y) is the pixel position of the output image, and the indices i and j represent the spatial location within the kernel matrix. The formula says that each pixel value in the output image is the weighted sum of the filter applied at that location to the surrounding pixels. It is not difficult to see that, for each coordinate (x, y) of each channel of the input image f, ordinary convolution applies a filter g of the same shape. This is because in ordinary convolution the parameters of the filter are fixed and independent of the pixel values in the input image; no matter what the pixel values are, the same filter is used. However, in tasks such as irregular image repair, the missing area may have arbitrary shape and size, while the filter used by ordinary convolution is fixed, so it is difficult to adapt to missing areas of different shapes and sizes, resulting in poor repair results.

In order to solve the above problems, it is necessary to control the interference of invalid information in the missing area of the image on the convolution result. Therefore, Liu et al. [18] proposed the concept of partial convolution. The calculation process of partial convolution needs a binary mask to assist. This mask consists only of 0 and 1: the mask position corresponding to a pixel value of 0 in the input image is also 0, and the mask is 1 otherwise. Partial convolution uses the mask map to mark the areas of the input image that contain missing pixels. During convolution, only the valid, pixel-containing areas are used for the convolution operation, while the invalid areas are excluded. This ensures that the missing areas do not interfere with the convolution result. Furthermore, a new mask image is generated during partial convolution, which guides the convolution operation of the subsequent layers.

The operation flow of partial convolution is shown on the left side of Figure 2. Firstly, partial convolution is used to perform convolutional calculations on the input image, resulting in a new feature map. During this process, only the areas containing valid pixels are involved in the calculation, while the missing areas are excluded to avoid interfering with the convolution result. Then, the updated mask is used to perform a dot product with the feature map to obtain the input of the next layer in the network. Meanwhile, the updated mask is also used as the input for the next layer to guide the subsequent convolution operation. The partial convolution operation can be expressed as follows:

x′(x, y) = wᵀ(x ⊙ m) · sum(1)/sum(m), if sum(m) > 0; x′(x, y) = 0, otherwise

where w represents the weight of the current convolution filter, x is the characteristic value of the current sliding window, ⊙ denotes element-wise multiplication, m represents the binary mask used for marking, and (x, y) is the pixel position of the mask. After each partial convolution, the mask is updated following the strategy

m′(x, y) = 1, if sum(m) > 0; m′(x, y) = 0, otherwise

The process of updating the mask is illustrated in Figure 3.
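A minimal functional sketch of the partial convolution and mask update described above is given below, assuming stride 1 and 'same' padding. The scaling factor sum(1)/sum(m) and the binary mask update follow the rule stated in the text; the toy tensors, and the choice to share a single-channel updated mask across channels, are assumptions of this sketch rather than the authors' implementation.

```python
import torch
import torch.nn.functional as F

def partial_conv2d(x, mask, weight):
    """Partial convolution sketch: only valid (mask == 1) pixels contribute,
    the response is rescaled by sum(1)/sum(m) over each window, and the mask
    is updated (1 where the window saw at least one valid pixel)."""
    pad = weight.shape[-1] // 2
    ones = torch.ones_like(weight[:1])                 # all-ones kernel, 1 output channel
    valid = F.conv2d(mask, ones, padding=pad)          # sum(m) under each window
    raw = F.conv2d(x * mask, weight, padding=pad)      # convolve only valid content
    scale = weight.shape[1] * weight.shape[2] * weight.shape[3] / valid.clamp(min=1.0)
    out = raw * scale * (valid > 0).float()
    new_mask = (valid > 0).float()                     # mask-update rule, shared across channels
    return out, new_mask

x = torch.randn(1, 3, 64, 64)
mask = torch.ones(1, 3, 64, 64)
mask[:, :, 20:40, 20:40] = 0.0                         # damaged region
w = torch.randn(16, 3, 3, 3)
y, m = partial_conv2d(x, mask, w)
print(y.shape, m.shape)                                # (1, 16, 64, 64) (1, 1, 64, 64)
```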
When the mask region covered by the current convolutional window contains valid pixels, the corresponding region in the updated mask is marked as 1. However, the current masking method has a problem of considering all pixels as either valid or invalid, without considering the number of valid pixels. Therefore, in some cases, regions with only one valid pixel and regions with nine valid pixels are considered to have the same value. This is obviously unreasonable, especially as the network gets deeper, the actual information carried by regions of valid pixels can be very limited, as shown in the green region in the figure. The red area contains more effective pixels. Based on the concept mentioned above, Yu et al. [19] proposed a novel approach to update the mask in the feature maps. They classified the feature maps into two groups based on the number of channels. They applied either sigmoid activation or relu activation on each group, as depicted on the right side of Figure 2. Specifically, the sigmoid activation operation was named GATE. It generated a weight map, also known as a 'soft mask', through which the pixels with lower weight values were deemed to have a higher probability of containing invalid information. The soft mask was then multiplied with the activated feature map to reduce the confidence of invalid pixels in the feature map, thereby allowing the network to focus on the most informative regions. This process improved the quality of feature representations and helped eliminate the negative influence of noise and irrelevant information. The gate convolution proposed by Yu et al. [19] is undoubtedly effective in improving the quality of feature representations. However, this approach also presents two notable challenges that need to be addressed. Firstly, the feature maps are split into two groups, which require twice as many convolution kernels and double the number of parameters in the model. This operation increases the computational cost and storage requirements, making deploying the model on resource-limited devices challenging. Secondly, the equal division of feature maps may reduce the feature space available for learning and limit the expressive power of the model. This paper proposed a novel approach called separable mask update convolution to overcome these limitations. This approach addresses the challenges mentioned above by introducing a two-step process separating the convolutional and gating operations. Specifically, the separable mask update convolution first applies a regular convolutional operation to the input feature map, generating a set of intermediate feature maps. Then, a gating operation is performed on the intermediate feature maps to obtain a set of weight maps. By separating the two operations, the separable mask update convolution can reduce the number of convolution kernels and parameters in the model while achieving similar or even better performance than the original gate convolution approach. Moreover, the separable mask update convolution approach allows more flexibility in designing the model architecture and improves the model's ability to learn complex representations. Based on the operation principle shown in Figure 4, the separable mask update convolution method follows a few steps. Firstly, the group parameter of the convolution kernels is set to be the same as the number of input channels, which results in an equal number of output feature maps as the input channels. Let us assume the number of input channels is N c . 
Therefore, N_c feature maps are generated. Next, the N_c feature maps are divided into two groups with a proportion of N_c − 1 : 1. The relu function activates the first group, while the sigmoid function activates the second group. The reason for using two different activation functions is to provide diverse nonlinear transformations of the feature maps. After activation, the two groups are multiplied to obtain the weighted feature maps. Finally, the weighted feature maps are passed through a convolutional layer consisting of filters with a kernel size equal to 1. This convolutional layer expands the output channels and generates a new set of feature maps with increased depth. By using the separable mask update convolution method, the model can learn more complex representations with fewer parameters, resulting in better performance and faster convergence during training.

The convolution method proposed in this paper can significantly reduce the number of parameters compared to gate convolution. It also optimizes the number of feature maps required for the mask update and improves the information utilization rate. The parameter counts of the two methods can be compared as follows. The number of parameters for gate convolution [19] is

N_gc = K_size × K_size × N_in × N_out

and the number of parameters for separable mask update convolution is

N_smuc = K_size × K_size × N_in + N_in × N_out

Here N_gc and N_smuc represent the number of parameters required for gate convolution (GC) and separable mask update convolution (SMUC), respectively. The kernel size is denoted by K_size, while N_in and N_out represent the number of input and output channels. It can be observed that when K_size = 1, SMUC has more parameters than GC due to the presence of the additional N_in term; however, for larger kernel sizes, SMUC requires fewer parameters than GC.
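The following sketch is one possible reading of the module described above: a depthwise convolution produces N_c maps, the last map is turned into a soft mask by a sigmoid and reweights the relu-activated remainder, and a 1×1 convolution expands the channels. The exact split, the bias-free layers and the toy sizes are assumptions of this sketch rather than the authors' released code; its parameter budget roughly matches the K²·N_in (depthwise) plus N_in·N_out (pointwise) count discussed above.

```python
import torch
import torch.nn as nn

class SeparableMaskUpdateConv(nn.Module):
    """Sketch of a separable mask update convolution: depthwise KxK conv
    (groups = in_channels) produces Nc maps, split Nc-1 : 1; the single
    sigmoid-activated map acts as a learned soft mask reweighting the
    relu-activated maps, and a 1x1 conv expands the output channels."""
    def __init__(self, in_ch, out_ch, kernel_size=3):
        super().__init__()
        pad = kernel_size // 2
        self.depthwise = nn.Conv2d(in_ch, in_ch, kernel_size,
                                   padding=pad, groups=in_ch, bias=False)
        self.pointwise = nn.Conv2d(in_ch - 1, out_ch, kernel_size=1, bias=False)

    def forward(self, x):
        maps = self.depthwise(x)                            # Nc feature maps
        feats, gate = maps[:, :-1], maps[:, -1:]            # split Nc-1 : 1
        weighted = torch.relu(feats) * torch.sigmoid(gate)  # soft mask update
        return self.pointwise(weighted)                     # expand channels

smuc = SeparableMaskUpdateConv(64, 128)
print(smuc(torch.randn(1, 64, 32, 32)).shape)          # torch.Size([1, 128, 32, 32])
print(sum(p.numel() for p in smuc.parameters()))       # ~ 3*3*64 + 63*128 parameters
```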
Region Normalization

The separable mask update convolution effectively reduces the impact of invalid information in the missing region on the restoration results during the convolution process. In the normalization layer, however, regular normalization methods cannot completely avoid the interference of invalid information from the missing region, especially when the missing region is large. Traditional normalization methods usually normalize the pixel values of each feature map to reduce the covariance between feature maps, thereby enhancing the robustness and generalization ability of the network. However, since the pixel values in the missing region are usually zero or very small, such normalization methods often cannot effectively reduce the invalid information in the missing region and may instead amplify the impact of noise, further affecting the quality of the restoration results. To solve this problem, normalization methods designed specifically for the missing region have emerged in recent years, such as normalization based on local variance and normalization based on masks. These methods remove invalid information in the missing region through special processing of that region, thereby improving the quality of the restoration results. In this paper, the Region Normalization (RN) technique [20] is adopted to address the challenge of mean and variance shifts in the normalization process. The method is specifically designed for use in the early layers of the inpainting network, where the input feature contains large corrupted regions that result in significant mean and variance shifts. RN addresses this issue by separately normalizing and transforming the corrupted and uncorrupted regions, thus preventing information mixing. In specific operations, the region normalization method considers each input feature map along its four dimensions (N, C, H, W) and divides it into damaged and undamaged regions based on whether damaged data are present. As shown in Figure 5, the height and width of each feature map in the batch can be divided into multiple block regions; the green area represents the damaged data and the blue area the undamaged data, which are normalized separately.

Loss Function

When performing image restoration, the loss function is a crucial part: by defining an appropriate loss function, we can guide the model to learn how to better restore the image. In this paper, several loss functions are adopted for image inpainting, including the reconstruction loss, perceptual loss, adversarial loss and style loss. These four loss functions correspond to the reconstruction error, feature similarity, adversariality, and style similarity between the inpainted image and the real image, respectively. The total loss of the generator is a weighted sum of these four terms, each with its own weight. The reconstruction, perceptual, adversarial and style losses are defined as follows:

L_rec = ||ŷ − y||_1

L_perc = Σ_{i=1}^{L} (1/N_i) ||φ_i(ŷ) − φ_i(y)||_1

L_adv(G, D) = log D(y) + log(1 − D(G(z)))    (8)

L_style = Σ_{i=1}^{L} ||Gram(φ_i(ŷ)) − Gram(φ_i(y))||_1

Among them, ŷ is the image generated by the generator, y is the real image, and φ_i represents the feature representation of the i-th layer in the pre-training model, specifically the pool1, pool2, and pool3 layers; therefore, in this paper, L = 3. N_i represents the number of elements in φ_i, D is the discriminator, G is the generator, z is the input image, and Gram denotes the Gram matrix. The total loss is then expressed as

L_total = α L_rec + β L_perc + γ L_adv + δ L_style

The weight coefficients α, β, γ, δ control the contribution of each loss function to the overall loss. The total loss thus combines four terms that constrain the generator from different perspectives and effectively improve the quality of image restoration. In this paper, α, β, γ, and δ are set to 1, 0.1, 0.1, and 250, respectively.

Experiment Setup

For our experiments, we chose three datasets commonly used for image inpainting, namely Paris Street View [13], Celeba-HQ [42] and Places2 [43]. The Paris Street View dataset, proposed by researchers from ParisTech and INRIA (French National Institute for Research in Computer Science and Automation), consists of 15,000 high-resolution images capturing street views and buildings in Paris. This dataset is commonly used in research on scene understanding, image inpainting, and image synthesis in computer vision and image processing tasks. The Celeba-HQ dataset, introduced by Ziwei Liu et al. in 2018, is an extension of the Celeba (Celebrities Attributes) dataset with higher quality images. It contains 30,000 high-quality images of celebrity faces that have been carefully curated and processed. The Celeba-HQ dataset is widely used for training and evaluating computer vision algorithms related to face recognition, face generation, and face inpainting tasks.
The Places2 dataset is a widely used large-scale image dataset proposed by researchers from the International Computer Science Institute (ICSI) and the Berkeley Vision and Learning Center (BVLC). It comprises over one million carefully curated images capturing diverse real-world scenes such as indoor and outdoor environments, natural landscapes, urban street views, and office spaces. These high-quality images, with resolutions of 256 × 256 or 512 × 512 pixels, exhibit rich semantic and visual diversity, covering various scene types, lighting conditions, and perspectives. The Places2 dataset serves as a vital benchmark for scene understanding, image generation, image classification, and other computer vision tasks, enabling researchers to train and evaluate their models and driving advancements in the field.

In this study, the first 14,900 images from the Paris Street View dataset were used for training the model, while 100 images were reserved for testing. For the Celeba-HQ dataset, the first 28,000 images were used for training and the remaining 2000 images for testing. This data split ensures that the model is exposed to representative image samples during both training and testing, enabling accurate assessment of the performance and generalization capability of the image inpainting algorithm across different datasets. For the Places2 dataset, we followed the official partitioning of training and testing sets; our training set consists of the first 100,000 images of the complete Places2 training set, and our testing set of the first 2000 images of the complete Places2 testing set. To create the masks for our experiments, we used the irregular mask dataset [18], which consists of a variety of randomly generated masks with different shapes and sizes. Liu et al. created and released this dataset of irregular masks when they proposed partial convolution, and it has since become one of the most widely used public datasets for irregular-mask image inpainting. We divided the masks into five categories based on the proportion of missing area, namely 10-20%, 20-30%, 30-40%, 40-50%, and 50-60%; each category contained 2000 masks. To train our model, we used a single NVIDIA GeForce GTX 1080Ti graphics card and set the number of epochs to 10. Training continued until the model converged and achieved satisfactory results on our test dataset.

Quantitative Comparison

In addition to describing the proposed method, this paper also compares it with other commonly used image restoration methods, including Region Normalization (RN) [20], Conditional Texture and Structure Dual Generation (CTSDG) [21], and Multi-level Interactive Siamese Filtering (MISF) [40]. These comparison methods have shown good performance in recent years in the field of image restoration. The comparison was carried out on two datasets, Celeba-HQ and Paris Street View, and the test results are presented in Tables 1 and 2, respectively. The two metrics used to evaluate the performance were PSNR and SSIM, which reflect the pixel similarity and structural similarity between the inpainting results and the original image, respectively.
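For reference, PSNR and SSIM can be computed as in the short sketch below. This is the standard definition of the two metrics, not the authors' evaluation script; the random test images are placeholders, and the `channel_axis` argument assumes a recent scikit-image release.

```python
import numpy as np
from skimage.metrics import structural_similarity

def psnr(reference, restored, data_range=1.0):
    """Peak signal-to-noise ratio in dB for images scaled to [0, data_range]."""
    mse = np.mean((reference - restored) ** 2)
    return 10.0 * np.log10(data_range ** 2 / mse)

# Toy example: a random "ground truth" and a slightly perturbed "restoration".
rng = np.random.default_rng(0)
gt = rng.random((256, 256, 3))
restored = np.clip(gt + 0.02 * rng.standard_normal(gt.shape), 0.0, 1.0)

print(f"PSNR: {psnr(gt, restored):.2f} dB")
print(f"SSIM: {structural_similarity(gt, restored, channel_axis=-1, data_range=1.0):.4f}")
```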
We also compared the proposed method with MISF [40], the repair network and optimization network (RNON) [41], and the features fusion and two-steps inpainting (FFTI) [3] methods on the comprehensive Places2 dataset. In this comparative experiment, we excluded the extreme conditions of very small and very large missing areas and focused on the common range of 20% to 50% missing regions; the results are presented in Table 3.

The results show that the proposed method outperformed the comparison methods in terms of PSNR and SSIM on both datasets, especially in the 10% to 60% range of missing area. For example, on Celeba-HQ, the proposed method achieved a PSNR improvement of 1.06-1.6 dB and an SSIM improvement of 0.026-0.127, depending on the scale of the missing area. On Paris Street View, the PSNR improvement was 0.827-1.69 dB and the SSIM improvement was 0.019-0.137. These results indicate that the proposed method can recover more structural information, especially when repairing large missing areas. On the comprehensive Places2 dataset, our proposed method continues to exhibit a distinct advantage over recent state-of-the-art approaches in the restoration of large-scale missing image regions. In particular, for missing areas exceeding 30%, our method consistently outperforms the other techniques in terms of both PSNR and SSIM as the extent of the missing region increases. For small-scale missing regions (below 30%), although our method does not surpass the highly effective FFTI method, the discrepancy in PSNR values between the two approaches remains minimal.

Table 3. Quantitative results of the proposed method at different defect scales on the Places2 dataset (M = 2^20).

Furthermore, the proposed method has fewer parameters than the RN method, which has the fewest parameters among the large-scale comparison networks. Compared to the RNON method, our method has only one-third of the parameter count, and compared to the FFTI method it reduces the number of parameters by nearly two-thirds. Hence, the proposed method not only demonstrates better performance but also exhibits a more compact structure.

Figures 6 and 7 show the restoration results on the two datasets under different missing ratios. The results demonstrate that our proposed method outperforms the other methods in restoration effectiveness at every scale. It is worth noting that the performance of the existing methods and of our proposed method varies with the type of image. For example, on the CelebA-HQ dataset, which consists of facial images, SMUC-net produces more natural-looking results with softer facial contours, whereas the other methods tend to generate slightly more artificial-looking images. This indicates that our proposed method is more effective in preserving the natural features of facial images. On the Paris Street View dataset, which includes a variety of urban scenes, our method performs better in restoring the texture details of objects such as branches and windows. This is due to our method's ability to recover the structural information of the missing areas, which is crucial for restoring the texture details of objects.

Qualitative Comparison

Another significant advantage of our method is its ability to handle large missing areas. As demonstrated in Figure 7, our method can effectively recover the structural information of the missing areas and generate more realistic images than other methods, even when up to 60% of the image is missing.
As shown in Figure 8, on the Places2 dataset, it is easy to observe that our proposed restoration method outperforms other methods in terms of preserving more detailed information in large-scale irregular missing regions. Notably, it effectively restores finer details such as the texture on buildings or the intricate details of a child riding a toy horse. However, it should be noted that our method may still encounter challenges in restoring certain types of objects, such as text or other highly structured elements. In such cases, our method may be unable to repair them successfully. Nevertheless, our method produces fewer artifacts and more realistic results compared to other methods. In summary, our proposed method outperforms existing methods in restoring natural features and texture details, handling large missing areas, and producing more realistic results with fewer artifacts. These findings demonstrate the potential of our method for various applications, such as image editing and restoration. Ablation Study To validate the effectiveness of the proposed method in this paper, we conducted ablation experiments on the CelebA dataset. The mask used in this case is based on a general damage range of 30% to 40%. Following the proposed repair method, we restored separable mask update convolutions to ordinary convolutions and normalized the region normalization layer to a standard normalization layer as the base model. Then, we separately added separable mask update convolutions and region normalization layer for training and finally added both methods to the base model for training. The final model we obtained is the proposed repair method in this paper. The experimental results are shown in Table 4. After conducting experimental comparisons, we see that both replacing separable mask update convolution and region normalization layers can significantly improve the restoration performance compared to the original basic network architecture. This indicates that both methods are effective in improving restoration performance. Because both separable mask update convolutions and region normalization can reduce the interference of invalid pixels in the damaged area to some extent, they contribute to improving the repair results. It is worth noting that the core separable mask update convolution module proposed in this paper also plays an essential role in reducing network parameters. Ultimately, our experimental results show that the proposed SMUC-net achieves the best results in both restoration performance and network parameter count. Conclusions This article proposes a simple encoder-decoder network that combines separable mask update convolutions and region normalization techniques to improve image restoration. The network parameters are significantly reduced using separable mask update convolutions instead of traditional convolution operations. Additionally, the separable mask update mechanism can preserve more feature information and reduce the impact of invalid pixels by providing different weights to masked and unmasked areas, further enhancing the restoration effect. Furthermore, the article introduces the region normalization technique to provide different means and variances for masked and unmasked areas. This method can reduce the influence of masked areas on the restoration results, thereby improving the accuracy of image restoration. Through experimental comparisons, we found that the proposed method achieved a good restoration effect and network parameter quantity results. 
Experimental results on the Celeba-HQ and Paris Street View datasets show that our proposed method outperforms FFTI by 1.06-1.6 dB and 0.827-1.69 dB in terms of PSNR and by 2.6% to 12.7% and 1.9% to 13.7% in terms of SSIM under damage rates of 10% to 60%. Moreover, our method successfully reduces the parameter quantity by 16.58 M, making it the model with the minor parameters but the best restoration results. The image inpainting method proposed in this paper has achieved significant improvements in terms of network parameters and inpainting quality. However, the main limitation of our approach is it lacks interactivity. A possible future direction could be to incorporate user guidance information into the inpainting process, which may provide more opportunities for user participation and customization. In addition, robot painting [44,45] is also a promising application direction. In practical applications, our image inpainting method can assist robots in better filling in missing image content.
Assessment of a short phylogenetic marker based on comparisons of 3' end 16S rDNA and 5' end 16S-23S ITS nucleotide sequences in the genus Xanthomonas

A short phylogenetic marker previously used in the reconstruction of the Class γ-proteobacteria was assessed here at a lower taxonomic level, namely species in the genus Xanthomonas. This marker is 224 nucleotides in length. It is a combination of a 157-nucleotide sequence at the 3' end of the 16S rRNA gene and a 67-nucleotide sequence at the 5' end of the 16S-23S ITS. A total of 23 Xanthomonas species were analyzed. Species from the phylogenetically related genera Xylella and Stenotrophomonas were included for comparison purposes. A bootstrapped neighbor-joining phylogenetic tree was inferred from comparative analyses of the 224 bp nucleotide sequence of all 30 bacterial strains under study. Four major groups, Group I to IV, were revealed based on the topology of the neighbor-joining tree. Groups I and II contained the genera Stenotrophomonas and Xylella, respectively. Group III included five Xanthomonas species: X. theicola, X. sacchari, X. albilineans, X. translucens and X. hyacinthi. This group of Xanthomonas species is often referred to as the hyacinthi group. Group IV contained the other 18 Xanthomonas species. The overall topology of the neighbor-joining tree was in agreement with currently accepted phylogenies. The short phylogenetic marker used here could resolve species from three different Xanthomonadaceae genera: Stenotrophomonas, Xylella and Xanthomonas. At the level of the Xanthomonas genus, distant species could be distinguished; and whereas some closely related species could be distinguished, others were undistinguishable. Pathovars could not be distinguished. We have thus reached the resolving limit of this marker: pathovars and very closely related species from the same genus.

INTRODUCTION

The genus Xanthomonas comprises 27 species. These species are primarily characterized by the production of xanthomonadins, a water-insoluble yellow pigment, and the production of an exo-polysaccharide, xanthan gum, which is used as a thickening, stabilizing and gelling agent in the food, pharmaceutical, cosmetics and oil industries [1,2]. Most Xanthomonas species are plant pathogens [3]. They cause diseases on several economically important plants including crucifers, Solanaceae, citrus, cotton, cereals, ornamentals, and fruit and nut trees [3,4]. It is estimated that at least 124 monocotyledons and 268 dicotyledons are infected by Xanthomonas species [4][5][6].

In a recent study [26], a short 232 bp nucleotide sequence "marker" was used to reconstruct the phylogeny of the Class γ-proteobacteria. This 232 bp marker was a combination of the last 157 bp at the 3' end of the 16S rRNA gene and the first 75 bp at the 5' end of the 16S-23S rRNA internal transcribed spacer (ITS). We showed that the 157 bp sequence was highly conserved among closely related species. Owing to its higher rate of nucleotide substitutions, the 75 bp sequence added discriminating power among species from the same genus and among closely related genera from the same family. This marker could reconstruct the phylogeny of the species, genera, families and Orders within the Class γ-proteobacteria in accordance with the accepted classification.

In the current study, we further assess the resolving power of this marker at a much lower taxonomic level: species within the genus Xanthomonas.
Bacterial Species and Strains A total of 25 Xanthomonas strains from 23 species were analyzed. Four Xylella fastidiosa strains and one Stenotrophomonas maltophilia strain were added for comparison purposes. They were selected on the basis that their complete genome sequences were freely available in GenBank, at the National Center for Biotechnology Information (NCBI) completed microbial genomes database (http://www.ncbi.nlm.nih.gov/genomes/lproks.cgi, August 2009). All bacterial strains and their GenBank accession numbers are listed in Table 1. Sequence Analysis First, the 16S rRNA and the 16S-23S ITS sequences of the 30 bacterial strains under study were retrieved from GenBank. Second, the 16S rRNA gene nucleotide sequences were aligned using ClustalW [27] (data not shown). The length of the most conserved nucleotide sequence was determined to be 157 bp. Third, the 16S-23S ITS sequences were aligned using ClustalW (data not shown). The length of the most conserved nucleotide sequence was determined to be 67 bp. These two most conserved nucleotide sequences, the 157 bp at the 3' end of the 16S and the 67 bp at the 5' end of the 16S-23S ITS, were combined into a single 224 bp sequence for each bacterial species and strain under study. This 224 bp sequence will be used here as a phylogenetic marker for the Xanthomonas species and related genera under study. Phylogenetic Analyses A neighbor-joining tree was constructed [28] based on the alignment of the 224 bp sequence of the 30 bacterial strains under study. The tree was bootstrapped using 1,000 random samples of sites from the alignment, all using the ClustalW software [27] at the DNA Data Bank of Japan (DDBJ) (http://clustalw.ddbj.nig.ac.jp/tope.html), with the Kimura two-parameter method [29]. The neighbor-joining tree was drawn using TreeView (version 1.6.6) [30,31]. RESULTS AND DISCUSSION A bootstrapped neighbor-joining tree was inferred from the alignment of the 224 bp sequence of all 25 Xanthomonas species and pathovars, four Xylella fastidiosa strains and Stenotrophomonas maltophilia under study (Figure 1). Four Groups, Group I to IV, were revealed at the 95% nucleotide sequence identity level. Group I contains Stenotrophomonas maltophilia. Group II encompasses all four Xylella fastidiosa strains. They share 99% nucleotide sequence identities. Group III includes five Xanthomonas species: X. theicola, X. sacchari, X. albilineans, X. translucens and X. hyacinthi. They share at least 96% nucleotide sequence identities. This group of Xanthomonas species is often referred to as the hyacinthi group [15]. Our results are in agreement with the first identification of the hyacinthi group based on the homology of their 16S rRNA [13], 16S-23S ITS [15] and gyrB nucleotide sequences [17] and MLSA [18]. Group IV contains 18 Xanthomonas species. These species share at least 95% nucleotide sequence identities. Six species can be distinguished: X. axonopodis, X. codiaei, X. fragariae, X. campestris, X. cassavae and X. melonis. Xanthomonas perforans, X. euvesicatoria and X. alfalfae are grouped together and appear indistinguishable. These species share 100% nucleotide sequence identities. The grouping of these six species is in agreement with the work of Parkinson et al. [18] based on comparison of gyrase B gene sequences. Furthermore, the grouping of X. perforans, X. euvesicatoria and X. alfalfae is in agreement with the work of Young et al. [18] based on MLSA. 
Xanthomonas hortorum and X. vasicola, and X. oryzae and X. bromi are grouped together, respectively, and appear indistinguishable. Both pairs of species share 99% and 100% nucleotide sequence identities, respectively. Five other Xanthomonas species, X. gardneri, X. vesicatoria, X. cucurbitae, X. arboricola and X. pisi, are grouped together and appear indistinguishable. These species share 100% nucleotide sequence identities. The three X. campestris strains appear indistinguishable. They share 100% nucleotide sequence identities. Of the 23 Xanthomonas species under study, 15 species or groups of species could be distinguished by the 224 bp sequence used as a marker. Very closely related species, such as those in Group IV, could not be distinguished. Pathovars could not be distinguished, as exemplified by the three X. campestris pathovars. The overall topology of the neighbor-joining tree was, however, in agreement with phylogenetic trees based on the 16S rRNA [13] and the 16S-23S ITS [15]. In previous studies, we showed that a DNA sequence from the 3' end of the 16S rRNA gene and the 5' end of the 16S-23S ITS could be used as a marker in the reconstruction of phylogenies in the Gram-positive genus Bacillus and closely related genera [32], the Gram-positive Order Bacillales [33], and the Gram-negative Class γ-proteobacteria [26]. This marker ranged in size from 220 bp to 232 bp. It contained 150-157 bp from the 3' end of the 16S rRNA gene and 67-75 bp from the 5' end of the 16S-23S ITS. The 150-157 bp from the 3' end of the 16S rRNA gene was often sufficient to distinguish bacterial Orders, families, and species from different genera. This sequence was, however, highly conserved among closely related species. Owing to its higher rate of nucleotide substitutions, the 67-75 bp from the 5' end of the 16S-23S ITS added resolving power among closely related species from the same genus. This marker had proven useful in reconstructing the phylogenies of the genus Bacillus and closely related genera [32], the Order Bacillales [33] and the Class γ-proteobacteria [26] in accordance with accepted phylogenies inferred from much more comprehensive datasets. This marker presented several advantages over the use of the entire 16S rRNA gene or the 16S-23S ITS, or the generation of extensive phenotypic and genotypic data, in phylogenetic analyses. We showed that the method was simple, rapid, suited to large screening programmes and easily accessible to most laboratories. It also proved very valuable in revealing bacterial species which appeared misassigned and for which additional characterization appeared warranted. The resolving power of this marker has been further analyzed here in a much deeper branch of the Class γ-proteobacteria: the genus Xanthomonas. As expected, we have shown here that this marker could resolve species from three different Xanthomonadaceae genera: Stenotrophomonas, Xylella and Xanthomonas. At the level of the Xanthomonas genus, distant species could be distinguished. However, although some closely related species could be distinguished, others were grouped together and were indistinguishable. Clearly, pathovars could not be distinguished. We have met the resolving limit of this marker: pathovars and very closely related species. 
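As an illustration of the pairwise-identity comparisons that underlie the grouping statements above (species sharing at least 95% or 100% identity over the 224 bp marker), the sketch below computes percent identity between aligned marker sequences. It assumes the markers are already aligned to equal length; the toy sequences are placeholders, and the distance correction and bootstrapping performed with ClustalW are not reproduced here.

```python
# Rough sketch of pairwise percent-identity comparison over aligned markers.
# Gap columns are skipped; toy sequences below are not the real 224 bp markers.

from itertools import combinations

def percent_identity(a: str, b: str) -> float:
    """Percentage of identical positions between two equal-length aligned sequences."""
    assert len(a) == len(b)
    pairs = [(x, y) for x, y in zip(a, b) if x != "-" and y != "-"]
    matches = sum(x == y for x, y in pairs)
    return 100.0 * matches / len(pairs) if pairs else 0.0

def identity_matrix(aligned: dict[str, str]) -> dict[tuple[str, str], float]:
    return {(i, j): percent_identity(aligned[i], aligned[j])
            for i, j in combinations(aligned, 2)}

# Toy aligned "markers" (placeholders, not real data)
aligned = {
    "X_sp1": "ACGTACGTACGTACGTACGT",
    "X_sp2": "ACGTACGTACGTACGTACGA",  # one mismatch -> 95% identity
    "X_sp3": "TTGTACGTACGTACGTACTA",  # several mismatches
}
for pair, pid in identity_matrix(aligned).items():
    flag = "same group (>=95%)" if pid >= 95 else ""
    print(pair, f"{pid:.1f}%", flag)
```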
CONCLUSION A short DNA marker based on the 3' end of 16S rDNA and the 5' end of the ITS had been shown previously to be able to reconstruct the phylogeny of the Class γ-proteobacteria at the Order, family, genus and distantly-related species levels. This marker was analyzed here at a lower taxonomic level. First, we have shown that this marker could cluster species from the same genera within the family Xanthomonadaceae. Next, at the genus Xanthomonas level, we have shown that although the short DNA marker could distinguish several species, very closely-related species and pathovars could not be distinguished. We have reached the limit of the resolving power of the 224 bp sequence as a phylogenetic marker: very closely-related species and pathovars. Figure 1. Bootstrapped neighbor-joining tree of the genus Xanthomonas species inferred from the alignment of the 224 bp marker. Table 1. Bacterial species used in this study.
2018-05-04T17:31:56.195Z
2010-12-28T00:00:00.000
{ "year": 2010, "sha1": "6b0c9ca8b0f1ec189976eb6f737fddb14fa74565", "oa_license": "CCBY", "oa_url": "http://www.scirp.org/journal/PaperDownload.aspx?paperID=3421", "oa_status": "HYBRID", "pdf_src": "ScienceParseMerged", "pdf_hash": "6b0c9ca8b0f1ec189976eb6f737fddb14fa74565", "s2fieldsofstudy": [ "Biology" ], "extfieldsofstudy": [ "Biology" ] }
233605577
pes2o/s2orc
v3-fos-license
EFFECT OF LIGHTING SCHEDULE, INTENSITY AND COLOUR ON REPRODUCTIVE PERFORMANCE OF RABBIT DOES In order to establish a lighting regime suitable for rabbit farms in East China, the effects of lighting schedule, intensity and colour on the reproductive performance of rabbit does were evaluated by three experiments, respectively. In experiment 1, does were exposed to different lighting schedules: 16L:8D-continuous, 16L:8D-18d (6 d before artificial insemination (AI) to 12 d post-AI), 16L:8D-6d (6 d before AI to the day of AI) and 12L:12D-continuous. In experiment 2, does were exposed to different light intensities: 40 lx, 60 lx, 80 lx and 120 lx. In experiment 3, does were exposed to different light colours: white, yellow, blue and red. For all experiments, conception rate, kindling rate and pre-weaning mortality were calculated; litter size at birth, litter weight at birth, litter size at weaning, litter weight at weaning and individual kit weight at weaning were recorded. Results showed that none of the reproductive parameters of does were affected by the application of 16L:8D-18d lighting schedule compared with the continuous 16L:8D group (P>0.05). Moreover, rabbits does exposed to 80 lx light performed as well as those under 120 lx light in conception rate, kindling rate, litter size (total and alive) at birth and litter weight at birth (P>0.05). Furthermore, the exposures of 60 lx and 80 lx light were beneficial for litter weight at weaning. In addition, red light had a positive effect, as it led to a larger litter size and litter weight at weaning and lower pre-weaning mortality than white light (P<0.05). In summary, a 16L:8D photoperiod with 80 lx red light from 6 d before AI to 12 d post-AI is recommended for use in breeding of rabbit does according to our results. INTRODUCTION The lighting regime is applied for different purposes on rabbit farms, improving reproductive performance, increasing daily weight or improving fur quality and wool production . Although the natural light effect can be reduced or eliminated in rabbit barns, there may still be regional differences in the reproductive performance of female rabbits, which are related to climate, feed nutrition and management. In China, rabbits are mainly raised in Shandong, Henan and Sichuan provinces, and Shandong is located in East China. The annual average temperature is 10-16°C, the annual average amount of rain is 676 mm (56 d) and annual sunshine hours are 2491 h. However, the rabbit farms have adopted the lighting scheme used in Western European. We think it is necessary to determine if there is a more suitable lighting regime for them. Since the 1980s, the prolongation of daily lighting duration has been shown to have a positive effect on oestrus and the reproductive performance of female rabbits (Maertens and Luzi, 1995;Mattaraia et al., 2005). Mattaraia et al. (2005) found that a supplemental lighting schedule providing 14 h light (L)/10 h darkness (D) favoured fertility of primiparous does; thus, they recommended the adoption of such a schedule to increase productivity. Theau-Clement et al. (2008) provided evidence that the receptivity rate of rabbit does was higher under a 16L-8D photoperiod than under an 8L-16D photoperiod. Moreover, the practice in rational European rabbit production units is to put breeding females under artificial light for 15 to 16 h a day throughout the year, and many rabbit farms in China have copied W o r l d R a b b i t S c i e n c e this lighting schedule. 
However, as rabbits are active during the dark period and spend lighting hours in dark warrens, the exposure to shorter daylight may be more natural for them and can also avoid unnecessary waste of resources. According to the recommendation of the World Rabbit Science Association, the minimum levels of light intensity in a rabbit building should be 20 lx (Hoy, 2012). In rabbit does, a sufficient level of light intensity is 30-40 lx or at least 50 lx . There is limited research that has examined the effect of light intensity on breeding rabbits, and there are even fewer studies that have investigated the impact of lighting colour on rabbit does. However, for the commercially produced rabbits housed in buildings with artificial light, the duration, intensity and colour of light are very important. Therefore, the purpose of this study was to investigate the effects of different lighting schedules, intensities and colours on the reproduction of female rabbits, to establish a standard lighting scheme for the large-scale rabbit production in East China. Animals and experimental design Experiments were carried out on a commercial rabbit farm in Qingdao (Shandong). Qingdao is located in East China, at a northern latitude of 35°35'-37°09', with large annual variations in photoperiod length. Ambient temperature in the housing facility ranged from 16-28°C, with a relative humidity of 50-75%. Does were kept in single-level individual cages with an area of 0.75 m 2 /doe and a height of 0.4 m. During each experiment, does were fed ad libitum a commercial feed mixture consisting of 16.0% crude protein, 10.5 MJ metabolisable energy and 15.5% crude fibre. Water was also supplied ad libitum. Experiment 1: 5-6 mo old nulliparous Hyla does, with 3.5-4.5 kg of live body weight, were randomly divided into four groups and placed in rabbit houses with different lighting schedules ( Figure 1). Group 1 (16L:8D-con) was the control group, in which a 16-h lighting period was applied continuously throughout the experiment (6:00-22:00; n=184). In Group 2 (16L:8D-18d), a 16-h lighting period was used from 6 d prior to AI until 12 d post-AI (n=179), being the lighting period 12 h for the other days. In Group 3 (16L:8D-6d), a 16-h lighting period was used from 6 d prior to AI to the day of performing AI (n=193), being the lighting period 12 h for the other days. In Group 4 (12L:12D continuous), a daily 12 h light was applied throughout the experiment (6:00-18:00; n=199). This experiment lasted for 84 d (from July to October). In this experiment, 80 lx white light was used, which is the intensity and colour most used in Chinese rabbit farms. The light intensity is based on the height of rabbit eyes in the middle of the cage and was measured by photometer DT-8808 (China Everbest Machinery Industry Co., Ltd, China). After Experiment 1, the lighting period for all does was returned to 16 h per day for more than one month. This was designated the blank period ( Figure 1). Its purpose was to restore the female rabbits affected by different photoperiods to an identical physiological state. Experiments 2 and 3 were carried out almost at the same time and used a mixed population of primiparous and nulliparous rabbits. The primiparous rabbits (total number 755) were the rabbits used in Experiment 1, and the nulliparous rabbits (total number 460) were 5-6 mo old with 3.5-4.5 kg of live body weight. All rabbits were randomly divided into groups. 
In Experiment 2, does were exposed to lights of 40 lx (n=158), 60 lx (n=148), 80 lx (n=157) and 120 lx (n=159), respectively. The lighting schedule was 16L:8D-18d (Figure 1), based on the results of Experiment 1. And the light was white, which is the colour most used in rabbit farms. In Experiment 3: does were respectively exposed to white (n=146), yellow (n=150), blue (n=148) and red (n=149) colours of light. The lighting schedule was 16L:8D-18d (Figure 1), based on the results of experiment 1. Moreover, light intensity was 80 lx, which is the intensity most used in rabbit farms. Both experiments also lasted for 84 d (from November to February of the following year). For all the experiments, AI was performed on the 18 d of experiment and ovulation was induced by intramuscular injection of LHR-A3 (1μg, Ningbo Second Hormone Factory, China). Pregnancy diagnosis was performed by palpation on the 30 d of experiment (12 d post insemination). Conception rate was calculated as the number of pregnant does over the total number of inseminated does and multiplied by 100. On the day of kindling, total litter size, alive litter size and litter weight were recorded and the kindling rate (the number of kindled does/ the number of inseminated does and multiply by 100) was calculated. After birth, the litter were equalised within groups according to the average number of kits born alive (maximum 10 kits). At the day of weaning (35 d post parturition), the litter size, litter weight and individual kit weight were also recorded, and pre-weaning mortality was calculated as follows: number of kits dead at weaning divided by total number of kits born ×100. Statistical analysis The data on litter size (total and alive), litter weight and individual weight were expressed as means±standard error. One-way repeated analyses of variance (ANOVAs, SPSS software version 17.0) with post hoc Bonferroni were used to compare data. At the same time, conception and kindling rates and pre-weaning mortality were analysed by nonparametric tests based on chi-square. A P value of <0.05 was considered statistically significant. RESULTS AND DISCUSSION The most commonly used lighting schedule in rabbit farms is a continuous 16L:8D photoperiod, which was used as a control group in this experiment. As shown in Table 1, none of reproductive parameters was significantly affected when the duration of 16L:8D photoperiod was shortened from 84 days (whole course) to 18 days (P>0.05). This indicates that a daily lighting period of 16 hours from 6 days before AI to 12 days after AI is enough for the breeding of rabbit does. It will save more electricity than the continuous16L:8D lighting schedule. That is why we used this schedule in the following experiments 2 and 3. The group with a light duration reduced to 6 days (16L:8D-6d) showed similar litter weight at birth to 16L:8D-18d and 12L:12D-con (P>0.05) but different to 16L:8D-con (P<0.05). However, after the application of a continuous 12L:12D lighting schedule, the kindling rate, litter size, litter weight and individual kit weight at weaning were all depressed compared with the control group (P<0.05), although litter weight at birth was slightly increased (P<0.05). Its influence of kindling rate agreed with an investigation that kindling rate of Rex rabbit was decreased with the decrease of light duration from 14 to 12 h (Uzcategui and Johnston, 1992). 
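As a concrete, hypothetical illustration of the rate formulas and statistical procedures described in the Methods above (conception rate, one-way ANOVA with Bonferroni-corrected pairwise comparisons, and chi-square tests), the following sketch uses SciPy with made-up group data; it is not the authors' SPSS analysis and the numbers do not correspond to the reported results.

```python
# Sketch of the rate calculations and statistics described in the Methods,
# with entirely hypothetical numbers. SciPy stands in for the SPSS procedures;
# the Bonferroni step is applied manually to pairwise t-tests.

import numpy as np
from itertools import combinations
from scipy import stats

def rate(numerator: int, denominator: int) -> float:
    """e.g. conception rate = pregnant does / inseminated does * 100."""
    return 100.0 * numerator / denominator

# Conception outcomes per lighting group: [pregnant, not pregnant] (hypothetical)
table = np.array([[150, 34], [148, 31], [160, 33], [155, 44]])
chi2, p_chi, _, _ = stats.chi2_contingency(table)
print("conception-rate chi-square p =", round(p_chi, 3))

# Litter weight at weaning per group (hypothetical samples, kg)
groups = [np.random.normal(mu, 0.4, 30) for mu in (5.0, 5.3, 5.4, 5.1)]
f_stat, p_anova = stats.f_oneway(*groups)
print("one-way ANOVA p =", round(p_anova, 3))

# Bonferroni-corrected pairwise comparisons between groups
pairs = list(combinations(range(len(groups)), 2))
for i, j in pairs:
    t_stat, p_pair = stats.ttest_ind(groups[i], groups[j])
    print(f"group {i} vs {j}: corrected p = {min(1.0, p_pair * len(pairs)):.3f}")
```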
Furthermore, this result partially confirmed a previous study where kit weight at weaning was reduced when does were submitted to a 12L:12D photoperiod compared with a 16L:8D photoperiod (Mousa-Balabel and Mohamed, 2011). However, in contrast to our result, they found decreases in conception rate, litter size at birth and pre-weaning mortality after the application of a 12L:12D photoperiod. We think these differences were probably caused by differences in rabbit breed, sample size and mating procedure: they used 10 White New Zealand rabbits per group and adopted natural mating (Mousa-Balabel and Mohamed, 2011). In the present study, total litter size at birth and alive litter size at birth were not obviously changed (P>0.05). However, photoperiod could play a role in these two reproductive parameters, as long as more extreme light durations are applied. An earlier comparison of does kept under different light intensities showed that after five reproduction cycles, no difference was found in kindling rate, but the litter sizes (total born, kits born alive and alive at 21 d) differed by 3-7% in favour of the higher light intensity. In the present study, conception rate, kindling rate, litter size (total and alive) and litter weight at birth of the 80 lx and 120 lx groups were higher than those of the 40 lx group (P<0.05, Table 2). In addition, litter weight and individual weight at weaning were elevated by 60 lx and 80 lx light over 40 lx light (P<0.05, Table 2). However, light that is too strong can also have an adverse effect, as 120 lx light significantly decreased litter size at weaning compared with 60 lx and 80 lx light (P<0.05, Table 2), and also caused higher mortality (P<0.05, Table 2). The pre-weaning mortality of the 40 lx group was the lowest among groups, which was probably due to its smaller litter size at birth, so that each kit could get more milk and care from the mother. According to the above results, a light intensity of 80 lx is recommended, which is consistent with the intensity used in most rabbit farms. (Table 2 note: primiparous Hyla does were randomly divided into four groups and exposed to a 16L:8D-18d photoperiod with light intensities of 40, 60, 80 and 120 lx, respectively.) According to our results, light colour had no evident influence either on the conception rate and kindling rate of rabbit does or on the individual weight of kits at weaning (P>0.05, Table 3). However, yellow light significantly depressed litter size (total and alive) at birth compared with the other three colours (P<0.05, Table 3). Red light had a better effect on litter size and litter weight at weaning than blue and white light (P<0.05, Table 3). This was beyond our expectations, as the colour vision of the rabbit is limited, such that they can only detect wavelengths between blue and green (Nuboer, 1985). The beneficial effect of red light may be due to its high biological permeability, caused by its long wavelength. It is known that lights of different wavelengths have different degrees of absorption, scattering and reflection by biological media and tissues. Red light, due to its long wavelength, has the highest bio-penetrability (Li et al., 2016). It has been used to stimulate neural activity and improve brain function in brain photobiomodulation therapy (Salehpour et al., 2018). Recent studies have also shown that it alters inflammation and the immune response, as well as promoting healing in several tissues (Li et al., 2016; Shi et al., 2018). 
Therefore, the red light may enter the cerebral cortex or other tissues of the rabbit and cause complex biological reactions, leading to a positive impact on their health. Undoubtedly, further studies are necessary. The pre-weaning mortality was the lowest in the yellow group, which was probably due to its smaller litter size at birth as well. Considering that the majority of Chinese rabbit farms are using white light now, we recommend trying to replace it with a red light. The influence of light on kits is potentially through different pathways according to different developmental stages. Because rabbit kits open their eyes at 10 d of age (Abdo et al., 2017) and eat solid food from 16 to 18 d (Gidenne and Fortun-Lamothe, 2002), before that time, light induces a difference in kit growth that is probably caused by the different milk yields of rabbit does; however, after that time, light induces weight difference in kits, which may be due to the change in their feed intake. As reported, there were several studies about light affecting feed intake of young rabbits. For example, according to Reyne et al. (1978), when a dark period of 10 h was increased to 16 h, daily feed intake of growing rabbits was increased. CONCLUSIONS In this study, we used a large number of rabbit does (>140/group) and proved that a 16L:8D photoperiod for 18 d (from 6 d before AI to 12 d post-AI) with an 80 lx red light is the best lighting programme for the reproduction of female rabbits in East China.
2021-05-04T22:06:27.884Z
2021-03-31T00:00:00.000
{ "year": 2021, "sha1": "97a9fde3a36a775891660d746aa4d6c93f802994", "oa_license": "CCBYNCSA", "oa_url": "https://polipapers.upv.es/index.php/wrs/article/download/14623/13621", "oa_status": "GOLD", "pdf_src": "Anansi", "pdf_hash": "1c468c5a8c409e134948f0b8ec660fa7ff64e3b1", "s2fieldsofstudy": [ "Agricultural And Food Sciences" ], "extfieldsofstudy": [ "Mathematics" ] }
248368114
pes2o/s2orc
v3-fos-license
Identification of SARS-CoV-2 variants using viral sequencing for the Centers for Disease Control and Prevention genomic surveillance program Background The Centers for Disease Control and Prevention contracted with laboratories to sequence the SARS-CoV-2 genome from positive samples across the United States to enable public health officials to investigate the impact of variants on disease severity as well as the effectiveness of vaccines and treatment. Herein we present the initial results correlating RT-PCR quality control metrics with sample collection and sequencing methods from full SARS-CoV-2 viral genomic sequencing of 24,441 positive patient samples between April and June 2021. Methods RT-PCR confirmed (N Gene Ct value < 30) positive patient samples, with nucleic acid extracted from saliva, nasopharyngeal and oropharyngeal swabs were selected for viral whole genome SARS-CoV-2 sequencing. Sequencing was performed using Illumina COVIDSeq™ protocol on either the NextSeq550 or NovaSeq6000 systems. Informatic variant calling, and lineage analysis were performed using DRAGEN COVID Lineage applications on Illumina’s Basespace cloud analytical system. All sequence data and variant calls were uploaded to NCBI and GISAID. Results An association was observed between higher sequencing coverage, quality, and samples with a lower Ct value, with < 27 being optimal, across both sequencing platforms and sample collection methods. Both nasopharyngeal swabs and saliva samples were found to be optimal samples of choice for SARS-CoV-2 surveillance sequencing studies, both in terms of strain identification and sequencing depth of coverage, with NovaSeq 6000 providing higher coverage than the NextSeq 550. The most frequent variants identified were the B.1.617.2 Delta (India) and P.1 Gamma (Brazil) variants in the samples sequenced between April 2021 and June 2021. At the time of submission, the most common variant > 99% of positives sequenced was Omicron. Conclusion These initial analyses highlight the importance of sequencing platform, sample collection methods, and RT-PCR Ct values in guiding surveillance efforts. These surveillance studies evaluating genetic changes of SARS-CoV-2 have been identified as critical by the CDC that can affect many aspects of public health including transmission, disease severity, diagnostics, therapeutics, and vaccines. Supplementary Information The online version contains supplementary material available at 10.1186/s12879-022-07374-7. Goswami et al. BMC Infectious Diseases (2022) 22:404 Background As the number of patients confirmed to be infected with the severe acute respiratory syndrome coronavirus 2 (SARS-CoV-2) virus and developing coronavirus disease 2019 (COVID- 19) increased, international sequencing efforts began to determine genetic variations in SARS-CoV-2 genes that may potentially increase viral transmission and pathogenicity [1]. However, the number of genomes sequenced within the United States (U.S.) deposited in the online genome repository, GISAID, in March of 2021 represented only 1.6% of the number of COVID-19 cases that month. As international sequencing efforts increased, variants of high concern were iden- Among the SARS-CoV-2 variants of concern, Beta B.1.351 (South Africa) and Gamma P.1 (Brazil) have high potential to reduce the efficacy of some vaccines [2][3][4][5]. The B.1.617 variant (India) has become a variant of interest for its high transmission rate and ability to evade immune responses [6]. 
At the time of submission of this manuscript the first cases of the South African variant B.1.1.529 (Omicron) have been detected, but not yet identified in the U.S. [21] These genomic sequencing efforts have allowed scientists to identify not only SARS-CoV-2 positive patients and viral sequence variants, but to also monitor how new viral variants evolve and to understand how these changes affect the characteristics of the virus, with this information ultimately being used to better understand health impacts. In addition, these variants of concern are actively being monitored to determine if they may reduce the efficacy of currently approved vaccines against SARS-CoV-2 [7]. In March of 2021, the Centers for Disease Control (CDC) contracted with large commercial diagnostic labs to sequence patient samples across the U.S. [8]. These laboratories committed to sequencing up to 6000 samples per week, with the capacity to scale up in response to the nation's needs (Fig. 1). The purpose of this program is to perform routine analysis of genetic sequence data to enable the CDC and its public health partners to identify and characterize variant viruses-either new ones identified in the U.S. or those already identified abroad-and to investigate how variants impact COVID-19 disease severity and the effectiveness of vaccines and treatment. Infinity Biologix LLC (IBX), located in Piscataway NJ, was awarded one of these contracts to participate in this surveillance program. In this study we describe the results of full-length genomic sequencing and surveillance of the SARS-CoV-2 virus from the first 24,441 confirmed positive saliva (SA) samples and a small subset of nasopharyngeal and oropharyngeal (NP, OP) swab-based samples sequenced between April and June 2021. CDC laboratory surveillance program selection IBX was designated by the CDC to participate in its Genomic Surveillance for SARS-CoV-2 Variants program [1]. This program conducts genomic surveillance using a random sampling of previously confirmed positive samples from across the U.S., including all 50 states, Washington DC, Puerto Rico and major U.S. territories and possessions. A target of up to 6000 genomic sequences per week was established including State and Zip Code as the required demographics data. CDC requested, when possible and if available, additional demographic data as age, sex, and ethnicity of the patients. Patients and sample acquisition Diagnostic patient samples for SARS-CoV-2 testing arrived via multiple external test clients, approved by IBX and Food and Drug Administration (FDA). Demographic, vaccination, ethnicity, sex and age were available Keywords: COVID-19, SARS-CoV-2, Centers for Disease Control, Next generation sequencing, Reverse transcription polymerase chain reaction, Cycle threshold, Lineage, Variant (See figure on next page.) Fig. 1 CDC tracking of emerging variants through the pipeline for genomic surveillance (https:// www. cdc. gov/ coron avirus/ 2019-ncov/ varia nts/ cdc-role-surve illan ce. html). As part of the CDC National SARS-CoV-2 Strain Surveillance (NS3) System, contracted laboratories select for sequencing a set of deidentified specimens that were previously subjected to SARS-CoV-2 RT-PCR testing and determined to be positive. Generally, the samples then undergo a three-step process for generating sequence data. Specimen preparation and sequencing: SARS-CoV-2 RNA is extracted and converted to complimentary DNA, enriched, and loaded into the next-generation sequencers. 
Sequence reads are aligned to SARS-COV-2 reference strain using the k-mer detection method. Aligned reads are then used to generate the consensus sequence, call variants and lineage determination. The information along with sequencing quality control statistics are transferred to the CDC repository electronically. Published data are made available to scientists around the world through public repositories if provided by the patient during sample collection. Most samples (> 95%) were saliva-based and collected in Minnesota (MN) and New Jersey (NJ) given the presence of IBX laboratories in each state. Criteria for SARS-CoV-2 viral sequencing The criteria for positive samples selection were established using a nucleocapsid protein (N Gene) Cycle Threshold (Ct) value threshold of ≤ 30.0 on the IBX TaqPath SARS-CoV-2 QPCR Assay. Per the CDC requirement, samples to be sequenced must have been no more than 10 days old following confirmation of a positive test result. RNA isolation All samples selected for sequencing had nucleic acid freshly extracted from the primary sample source independently of the material extracted for the initial RT-PCR testing. Automated RNA extraction from either SA, NP or OP was carried out using the Chemagic Viral DNA/ RNA 300 Kit H96 (CMG-1033-S, PerkinElmer) on the Chemagic 360 Nucleic Acid Extractor (2024-0020, Perkin Elmer). Real-time PCR for SARS-CoV-2 All samples were tested using the Infinity BiologiX TaqPath Data analysis FASTQ files were generated for each sample after demultiplexing of the raw sequencing data on Base Space Sequence Hub (BSSH). Detection of the virus, generation of the consensus sequence and lineage/clade determination was performed using the DRAGEN COVID Lineage App v3.5.1 and v3.5.2 on BSSH. Statistical analysis Spearman's rank correlation coefficient, a rank-based analogue of Pearson's correlation, was used to measure the possibly non-linear but still monotonic relationship between mean Ct and % Genome Coverage. Tukey's test was used to test for differences in mean Ct value among the three collection methods. Viral detection Each sample was evaluated for the presence of the virus in the sequencing data using a k-mer based algorithm prior to performing variant calling steps. Each read was broken down into all possible contiguous 32 bases segments (32-mers) and compared to a pre generated list of 32-mers from all the amplicons in the CovidSeq assay (98 from SARS-CoV-2 and 11 from human control genes). An amplicon was considered detected when at least 150 k-mers were matched between the sample and reference. When 5 or more SARS-CoV-2 amplicons were detected, the virus was recorded as present in the sample and the variant calling pipeline was invoked. Alignment, variant calling and consensus Reads were aligned against the NC_045512.2 sequence (Severe acute respiratory syndrome coronavirus 2 isolate Wuhan-Hu-1) using the DRAGEN Map/Align algorithm. Reads pairs that aligned to the same start and end positions were marked as duplicate by default (only the highest quality pair is retained for variant calling). No quality pre-filtering of the data was performed by default, but portions of reads that do not match the reference were soft clipped by the aligner (removed for variant calling but retained in the Binary Alignment Map (BAM) file) to eliminate mismatches due to a drop in read quality or chimeric artifacts from the PCR. 
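A simplified sketch of the k-mer-based viral detection rule described above is given below: each read is decomposed into 32-mers, an amplicon counts as detected when at least 150 of its reference 32-mers are matched, and the virus is called present when five or more amplicons are detected. The reference amplicons and reads are placeholders; the production analysis was performed with the DRAGEN COVID Lineage app, not with this code.

```python
# Simplified illustration of the k-mer detection rule described above.
# Thresholds (32-mers, >=150 matching k-mers per amplicon, >=5 amplicons)
# mirror the text; reference amplicons and reads are placeholders.

K = 32
KMERS_PER_AMPLICON = 150
AMPLICONS_REQUIRED = 5

def kmers(seq: str, k: int = K) -> set[str]:
    """All contiguous k-length substrings of a sequence."""
    return {seq[i:i + k] for i in range(len(seq) - k + 1)}

def detect_virus(reads: list[str], amplicon_refs: dict[str, str]) -> bool:
    """Return True when enough amplicons are supported by k-mers from the reads."""
    read_kmers: set[str] = set()
    for read in reads:
        read_kmers |= kmers(read)
    detected = sum(
        1 for ref in amplicon_refs.values()
        if len(kmers(ref) & read_kmers) >= KMERS_PER_AMPLICON
    )
    return detected >= AMPLICONS_REQUIRED
```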
For variant calling, the DRAGEN Somatic default down sampling parameters were used (no more than 10,000 reads per 300 base window around a variable position and no more than 50 reads starting at the same position were considered, random removal with fixed seed for reproducibility). Each variant was then assigned a somatic quality (SQ) score and marked as 'weak_evidence' if falling below a fixed threshold (SQ < 3.0 hard filtering). Variant calling results were saved in a VCF formatted file and a consensus sequence generated using the bcftools CONSENSUS command. Any base covered with less than 10 reads was assigned an N (hard-masking) and any variable base was assigned the major allele base by default. Leading and trailing masked bases were then removed from the consensus by default. Lineage and clade assignment Consensus sequences were analyzed with the Pangolin and NextClade pipelines to generate the lineage and clade information, respectively. By default, the latest version of the PANGOLearn and NextClade databases were downloaded for the most up to date lineage and clade information. Data delivery to CDC Data delivery to CDC was performed through our internal web application. Along with connecting to the Bas-eSpace file system, the application also connected to the CDC S3 data bucket using standard Amazon Web Services (AWS) S3 command line connection protocol using key based authentication [1]. Data transfer was performed on a per sample basis per CDC guidelines. For each sample the application created a file stack with pertinent files for samples that pass the quality thresholds. The application then opened a connection to the CDC S3 bucket and created a dated folder. Batch folders were created inside the dated primary folder when more than one batch of transfers were processed on a given day. One subfolder per sample was then created in the batch folder. The application then transferred all the pertinent files for the sample into the sample folder and continued this process until all sample data were transferred. The application created a cumulative metadata file containing the metadata associated with the samples transferred and transferred the file to the S3 bucket. Sample characteristics A total of 24,441 SARS-CoV-2 positive samples (April to June of 2021) were sequenced (Additional file 1: Table S1). Of these, the majority (24,237) samples were SA, 131 and 73 were from NP and OP, respectively ( Genome coverage and ambiguity rates between sequencing instruments Of all the samples, 73.5% (17,974) samples were sequenced on the NextSeq 550, while 26.46% (6467) were sequenced on the NovaSeq 6000. The average sequencing coverage between the two was ascertained using average sequencing coverage over the SARS-COV-2 genome and fraction of nucleotides masked due to sequencing ambiguity in the consensus sequence generated. The global average SARS-CoV-2 genome coverage for all samples sequenced across both instruments was 1324×. Samples run on the NovaSeq 6000 had twice the average coverage (2133×) compared to samples run on NextSeq550 (1034×) because more reads per sample were generated on the NovaSeq 6000. The fraction of masked nucleotides in consensus sequence generated was globally 5.9%. The ambiguity fraction rate using the NovaSeq6000 was 2.9% while genomes sequenced on the NextSeq550 had a higher ambiguity rate at 6.9% (Fig. 2a-c). 
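The consensus masking rule and the ambiguity metric discussed above can be illustrated with a small sketch: positions covered by fewer than 10 reads are hard-masked as N, leading and trailing masked bases are trimmed, and the fraction of masked nucleotides is reported. The inputs are illustrative only; the actual consensus sequences were generated with bcftools within the DRAGEN pipeline, not with this code.

```python
# Sketch of consensus hard-masking and the ambiguity-fraction metric.
# Positions with depth < 10 become 'N'; leading/trailing Ns are trimmed.

MIN_DEPTH = 10

def mask_consensus(bases: str, depth: list[int]) -> str:
    masked = "".join(b if d >= MIN_DEPTH else "N" for b, d in zip(bases, depth))
    return masked.strip("N")  # drop leading and trailing masked bases

def ambiguity_fraction(consensus: str) -> float:
    return consensus.count("N") / len(consensus) if consensus else 0.0

# Toy example: 10 bases with per-position read depths
cons = mask_consensus("ACGTACGTAC", [3, 12, 15, 9, 20, 22, 30, 4, 11, 2])
print(cons, round(100 * ambiguity_fraction(cons), 1), "% masked")
```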
Sequence data quality between sample collection methods The mean RT-PCR Ct values for the three sample types used in this study across NP, OP and SA were 20.83, 21.83 and 22.70 respectively. By Tukey's test, the mean Ct values differed significantly between SA and NP (p = 4e−7), but not between the other sample pairs. With respect to SARS-CoV-2 sequence data quality metrics, several sample specific patterns were identified. The rate of failure of analytical detection of SARS-CoV-2 sequence was highest in OP samples (4.11%, 3 of 73 samples), followed by SA samples (1.58%, 382 of 24,237 samples). The sample type exhibiting the lowest rate of SARS-CoV-2 detection failure was NP (0%, 0 of 131 samples). An important measure of input sample and sequence data quality is reflected in the percentage of consensus sequence masked as ambiguous. Higher values of this percentage are indicative of greater proportion of data that is not informative in variant determination. Figure 3a depicts the average percentage of consensus sequence masked as ambiguous for the three sample types used in this study. With a value Fig. 2 Genome coverage and ambiguity rates between sequencing instruments. a Distribution of average coverage over genome in samples sequenced using NextSeq550 and NovaSeq600. b Distribution of percent ambiguous nucleotides (masked) in consensus sequence. c Fraction of consensus sequence masked due to nucleotide ambiguity in samples sequenced on NextSeq550 and NovaSeq6000 instruments sequenced on NextSeq550 and NovaSeq600 instruments of 0.5%, NP samples yielded the highest quality sequence data, while SA (5.84%) and OP swabs (41%) performing relatively less well. Another critical measure of quality, sequencing depth, was assessed for the three sample types as depicted in Fig. 3b. This analysis determined that NP and SA samples yielded the greatest depth, exhibiting values of 1948× and 1323×, respectively, while OP sample coverage was the lowest at 586×. Association between sequencing-based detection of SARS-CoV-2 virus and baseline RT-PCR Ct values Samples were selected for inclusion in the sequencing study based on a minimum threshold of SARS-CoV-2 detection by RT-PCR (N Gene Ct value threshold ≤ 30). Table 2). Association between Ct value and genome coverage We observed a strong inverse association between higher coverage and N gene Ct values as determined by RT-PCR (partial r, controlling for sequencer, = − 0.58, partial r-squared = 0.34, p < 1e−15) ( Fig. 5) (Fig. 5). Group 1, with lowest Ct samples had the highest genomic coverage while group 4 with highest Ct values, had the lowest mean genomic coverage. We identified only eight samples in our dataset that had mean genome coverage less than 100×. Of the remaining 24,433 samples we determined that a mean Ct value of 26.5 for NextSeq 550 and 27.9 for NovaSeq 6000 was a threshold for producing high quality genome sequencing reads (Fig. 5a-c). Strain prevalence We identified a total of 161 lineages of SARS-CoV-2 variants in our dataset of 24, 237 positive SARS-CoV-2 samples. The 10 most prevalent lineages are listed in Table 3. For a full list of all variants identified refer to Additional file 1: Table S1. The prevalence and transmission of Delta (B.1.617) continued to rise and was the most prevalent between the months of July and December 2021 (Fig. 6a). The only CDC variant of interest that was not identified in our study within the sample set and time frame tested was B.1.617.3. 
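To illustrate the association analyses reported above (Spearman's rank correlation between Ct value and coverage, and the partial correlation controlling for sequencer), the following sketch simulates data and computes both statistics. The simulated numbers are not the study data; the published estimates (e.g. partial r = −0.58) come from the real cohort.

```python
# Sketch of the Ct-versus-coverage association analysis: Spearman's rho for the
# monotonic relationship, and a partial Pearson correlation controlling for the
# sequencer by correlating residuals after regressing on a sequencer dummy.
# All data below are simulated for illustration.

import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
n = 500
sequencer = rng.integers(0, 2, n)          # 0 = NextSeq 550, 1 = NovaSeq 6000
ct = rng.normal(24, 3, n)                  # N gene Ct values
coverage = 4000 - 120 * ct + 800 * sequencer + rng.normal(0, 300, n)

rho, p = stats.spearmanr(ct, coverage)
print("Spearman rho:", round(rho, 2), "p:", p)

def residuals(y, covariate):
    X = np.column_stack([np.ones(len(covariate)), covariate.astype(float)])
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    return y - X @ beta

partial_r, p_partial = stats.pearsonr(residuals(ct, sequencer),
                                      residuals(coverage, sequencer))
print("partial r (controlling for sequencer):", round(partial_r, 2))
```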
Further details on trends in lineage discovery in our study as the data continued to be accumulated between the months of June 2021 to February 2022 demonstrated a rapid rise and major prevalence of the Omicron variant BA.1 are provided in Additional file 1: Table S1 and Fig. 6a and b. Extending the data set into March 2022 (Fig. 6b) sub-lineages of BA.1, BA 1.1 and BA.2 continue to increase at the time of this submission. Discussion Genomics-based SARS-CoV-2 surveillance is an important tool for monitoring viral transmission during the current and future phases of the COVID-19 pandemic. In this work, we sequenced the genomes from > 24,000 SARS-CoV-2-positive samples collected during routine diagnostic testing. Furthermore, we analyzed the genomic epidemiology and sequencing applications of SARS-CoV-2 for the CDC surveillance program between March 2021 and June 2021, with a focus on variant and lineage determination, quality of sequencing results between sequencing instruments and the association between Ct values, genome coverage and variant detection. Various NGS-based approaches have been developed to perform SARS-CoV-2 WGS using different sequencing platforms [12,13]. These include direct RNA sequencing and metagenomics, amplicon-based methods and oligonucleotide capture-based methods. We employed COVIDseq for mass-scale SARS-CoV-2 genomic surveillance [14] and demonstrated that COVIDseq enables near-complete coverage of the SARS-CoV-2 genome. We used statistical analysis to demonstrate that there were differences in average coverage between the sequencing instruments with the NovaSeq 6000 having larger output per sample compared to the NextSeq 550. In addition, we determined that a Ct value of 26.5 for NextSeq and 27.9 for NovaSeq served as thresholds for producing high quality genomes, above these thresholds sequencing quality degraded. High quality genomes were identified as ones that have a fraction of ambiguous nucleotides in the consensus sequence of less than 10% and average genome coverage > 100×. Sequencing of samples with Ct values > 35 has previously been reported to show a sizable fraction of the reads are aligned to the human genome, independently of the method used to prepare the libraries, resulting in inconsistency in lineage and variant detections [14]. We demonstrate that reports regarding variants of concern, such as transmission of the Delta variant, are consistent with other US and international reports [15]. We identified a total of 161 lineages of the SARS-CoV-2 variants in our dataset, between April to June 2021, and the top 10 lineages were consistent with data reported to date beyond the present study. In addition, as is the nature of RNA viruses, each new variant and strains that are identified may gain a natural advantage and become the dominant strain [15,16] (Fig. 6). Moreover, with the increased number of positive cases in summer and winter of 2021, these data may also be consistent with vaccine breakthrough infections [15]. However, additional studies are warranted. Additional variants, such as the Delta variant, where the variant of interest may not Although the Delta variant continues to represent most SARS-CoV-2 infections, sequencing data between the months of June and September 2021 are demonstrated increasing rates of transmission of another variant of concern, the Mu (B.2.621) variant, first identified in Colombia in January of 2021 [17]. 
Although the Mu variant accounts for only about 0.1% of cases within the present dataset, 77 positive samples with B.1.621 (Mu) have been identified since June of 2021. By comparing the RT-PCR Ct values with the ability to detect variants of SARS-CoV2, a likely correlation with the clinical features of the infection may be inferred. Increased genome coverage associated with lower Ct values are likely due to higher viral load in each sample [18]. These results are consistent with previous reports where the ability to accurately detect variants correlates with lower Ct values [19]. However, most of these samples are from SA-based collections. Conflicting evidence regarding the sensitivity between various collection methods for detecting SARS-CoV-2 positive patients have been previously reported SARS-CoV-2 [6,20,21]. Thus comparison between SA and other collections method remains to be performed. The data presented in this study indicate that NP and SA are the optimal sample types for SARS-CoV-2 surveillance sequencing studies, both in terms of strain identification and sequencing depth of coverage. Additional data and studies will need to be performed to further elucidate sensitivities associated with collection methods. As demonstrated by the data above (Table 1), most samples tested derived from patients who indicated they were not vaccinated at the time of testing. Of those who were vaccinated, at the time of this study, the prominent variant in those breakthrough infections (April-June 2021) was the B.1.1.7 (UK) variant. During this same period 28 cases of the Delta (B.1.617) variant were identified. However, as demonstrated in Fig. 6a, from June to July 2021 the occurrence of the UK variant dramatically decreases with a significant rise in the Delta variant that also correlated with an increase is positive cases throughout the U.S. At that time this accounted for the prominent variant identified throughout the US. Vaccination status of samples sequenced from July through the remainder of 2021 are currently being collected (data not published). Thus, additional studies and analyses will be required to correlate, vaccination status, viral loads, sequencing variant impacts and clinical severity. The incomplete demographic data of all samples in aggregate which were sequenced limits the ability to stratify variant transmission between not only geographical regions but also within Sex, age and other factors that may contribute to increased transmission rate. In November of 2021, the more transmissible variant from South Africa B.1.1.529 (Omicron) was detected [22,23]. This variant quickly became more transmissible and prevalent throughout not only the U.S but the world, accounting for 99% of all SARS-CoV-2 positive cases (Fig. 6a, b). Of interest is the concern that patients positive for this variant may be missed as the variant results in an S gene drop out. For these reasons it is imperative the appropriate methods for detection of positive SARS-CoV-2 patients can detect the N and Orf genes as well. At the time of this submission, sub-lineages of Omicron (BA.2) are increasing within the population accounting for increased positivity rates throughout Europe and is increasing within the U.S. [22] (Fig. 6b). Conclusion In summary, the CDC surveillance screening program for SARS-CoV-2 variant transmission using whole viral genome sequencing is an important approach for population-based surveillance and control of viral transmission in the next phase of the COVID-19 pandemic. 
As SARS-CoV-2 strains continue to be sequenced by government, private, and academic entities all over the world, the sequencing data must continue to be shared publicly. These sequencing and data-sharing efforts will enable genomic surveillance of the virus and contribute to efforts to understand population-level spread and control of SARS-CoV-2.
2022-04-25T13:51:52.366Z
2022-04-25T00:00:00.000
{ "year": 2022, "sha1": "1b9287fe2730fdc05573bd392d793858b60a1578", "oa_license": null, "oa_url": null, "oa_status": null, "pdf_src": "Springer", "pdf_hash": "1b9287fe2730fdc05573bd392d793858b60a1578", "s2fieldsofstudy": [ "Medicine" ], "extfieldsofstudy": [ "Medicine" ] }
4817665
pes2o/s2orc
v3-fos-license
The pathological effects of CCR2+ inflammatory monocytes are amplified by an IFNAR1-triggered chemokine feedback loop in highly pathogenic influenza infection Background Highly pathogenic influenza viruses cause high levels of morbidity, including excessive infiltration of leukocytes into the lungs, high viral loads and a cytokine storm. However, the details of how these pathological features unfold in severe influenza infections remain unclear. Accumulation of Gr1 + CD11b + myeloid cells has been observed in highly pathogenic influenza infections but it is not clear how and why they accumulate in the severely inflamed lung. In this study, we selected this cell population as a target to investigate the extreme inflammatory response during severe influenza infection. Results We established H1N1 IAV-infected mouse models using three viruses of varying pathogenicity and noted the accumulation of a defined Gr1 + CD11b + myeloid population correlating with the pathogenicity. Herein, we reported that CCR2+ inflammatory monocytes are the major cell compartments in this population. Of note, impaired clearance of the high pathogenicity virus prolonged IFN expression, leading to CCR2+ inflammatory monocytes amplifying their own recruitment via an interferon-α/β receptor 1 (IFNAR1)-triggered chemokine loop. Blockage of IFNAR1-triggered signaling or inhibition of viral replication by Oseltamivir significantly suppresses the expression of CCR2 ligands and reduced the influx of CCR2+ inflammatory monocytes. Furthermore, trafficking of CCR2+ inflammatory monocytes from the bone marrow to the lung was evidenced by a CCR2-dependent chemotaxis. Importantly, leukocyte infiltration, cytokine storm and expression of iNOS were significantly reduced in CCR2−/− mice lacking infiltrating CCR2+ inflammatory monocytes, enhancing the survival of the infected mice. Conclusions Our results indicated that uncontrolled viral replication leads to excessive production of inflammatory innate immune responses by accumulating CCR2+ inflammatory monocytes, which contribute to the fatal outcomes of high pathogenicity virus infections. Background Influenza A virus (IAV) is a common human respiratory virus and causes seasonal epidemic and pandemic infections. In the past 100 years, pandemics of influenza have been caused by the IAV strains H1N1 (1918), H2N2 (1957), H3N2 (1968) and H1N1 (2009) [1,2]. These pandemic strains vary in their virulence and pathogenicy. Compared to the 1968 and 2009 pandemics, the 1957 pandemic featured intermediate pathogenicity, while the virus causing the 1918 pandemic was relatively highly pathogenic in the human population [2]. Currently we are threatened by sporadic infections by emerging avian IAVs, including highly pathogenic avian H5N1 and H7N9 viruses [1,3]. Two well known, highly pathogenic IAVs, 1918 H1N1 and avian H5N1 cause high levels of morbidity including excessive infiltration of neutrophils and monocytes into the lungs, high viral loads and hypercytokinemia, with significant increases of IL-1, IL-6, IL-8, TNF, CXCL10 and CCL2 in the patients' plasma [4][5][6]. Thus, cytokines and chemokines induced at high levels by IAV infections have become targets for the development of IAV therapy. However, the results of experiments using knock-out mice indicate that none of them alone determines highly pathogenic virus-induced lethality [7,8]. 
Thus, we used another approach to identify the immune cell types that are recruited during infection and contribute to the excessive inflammatory responses during highly pathogenic virus infection. H1N1 IAV circulate continuously in the human population, and the three H1N1 strains selected for this study display low, intermediate and high virulence in mice as follows: (1) seasonal H1N1 A/Taiwan/141/02 (141; low virulence); (2) pandemic H1N1 A/Taiwan/126/2009 (swineorigin influenza virus, SOIV; intermediate virulence) and (3) mouse adapted H1N1 A/Puerto Rico/8/34 (PR8; high virulence). Using these mouse models, we demonstrated that rate of viral clearance and disease severity is correlated with the numbers of a defined Gr1 + CD11b + myeloid population in the lung. Until now, it is not clear how and why they accumulate in the severely inflamed lung. In this study, we selected this cell population as a target to investigate the extreme inflammatory response during severe IAV infection. In this paper, we report that CCR2+ inflammatory monocytes are the major cell components in this defined Gr1 + CD11b + myeloid population. Multiple roles of CCR2+ inflammatory monocytes during viral infections have been reported: promoting host survival of West Nile virus-induced encephalitis and IAV and mouse hepatitis virus infections [9][10][11][12], stimulating anti-viral Th1 immunity in HSV-2 infection [13] and suppressing anti-viral CD8 T cell responses in mouse cytomegalovirus (MCMV) and persistent lymphocytic choriomeningitis virus (LCMV) infections [14,15]. These results indicate that CCR2+ inflammatory monocytes play a double edge sword in anti-viral responses and immunopathogenesis. Using established infection models with variable rates of viral clearance, which are accompanied by different levels of inflammatory infiltrates, we found that an amplified inflammatory chemokine feedback loop links the impaired clearance of highly pathogenic virus and a massive infiltration of CCR2+ inflammatory monocytes. So, we sought to investigate which cell types are responsible for the production of CCR2 ligands. Furthermore, we identified the inflammatory signals that are triggered by an impaired anti-viral response to induce expression of CCR2 ligands. Finally, the pathological effects of excessive accumulated CCR2+ inflammatory monocytes were explored during highly pathogenic IAV infection. Overall, we provided a comprehensive study to address the detail mechanism why and how accumulated CCR2+ inflammatory monocytes involved in highly pathogenic IAV infections. Impaired clearance of virus led to spread of virus to newly arrived CCR2+ inflammatory monocytes and to sustain production of IFNAR1-induced CCR2 ligands, which attract BM-derived CCR2+ monocytes migrated to inflamed lung and amplify their own recruitment continuously through the IFNAR1-dependent chemokine feedback loop, resulting in an enhancement of CCR2+ inflammatory monocytes-mediated pathological effects. Mouse strains C57BL/6 and CCR2−/− mice were purchased from Jackson Laboratory (Bar Harbor, ME, USA). IFNAR1−/− mice were obtained from Dr. Chien-Kuo Lee (Graduate Institute of Immunology, National Taiwan University, Taipei, Taiwan). MyD88−/− mice were obtained from Hui-Chen Chen (Graduate Institute of Basic Medical Science, China Medical University, Taichung, Taiwan). Mice were maintained under specific pathogen free conditions in Chang Gung University. 
All animal experiments were performed according to the animal protocol approved by the Institutional Animal Care and User Committee of Chang Gung University and in accordance with the guidelines of Animal Care and Use of Laboratory Animals of the Taiwanese Council of Agriculture. Virus preparation and inoculation All segmented expression plasmids of IAV were kindly provided by Dr. Shin-Ru Shih of Research Center for Emerging Viral Infections, College of Medicine, Chang Gung University, Taiwan. Recombinant IAVs, seasonal 141, pandemic SOIV and mouse adapted PR8 were generated using a reverse genetics system, according to previous reports [16,17]. Briefly, 293 T cells were transfected by using 15 μl Trans IT-LT1 (Mirus Bio LLC) with 1 μg per each plasmid (pPolI-PB2, −PB1, −PA, −HA, −NP, −NA, −M, −NS of 141, SOIV or PR8). Recombinant IAVs were harvested and propagated in 10 day-old embryonated chicken eggs. Harvested viruses were aliquoted and stored at −80°C until use. For IAV inoculation, mouse was infected intranasally with 200 PFU of virus. Plaque assays Lungs were harvested and grind tissue suspension were frozen in 600 μl aliquots. Viral supernatant was thawed and then 10 folds serially diluted. MDCK cells were cultured at a density of 1 × 10 6 cells/well in a 6 well-plate. One hundred microliter of each serial dilution containing trypsin was added to 90% confluent of MDCK cells. After 1 hour incubation, each well was overlaid with a ratio of 1:1 mixture of 0.8% agarose and 2× serum free DMEM to wells. Two days later, the plaques were visualized by addition of 1% crystal violet and plaque forming unit (units/lung) was calculated. Preparation of lung leucocytes, mediastinal lymph node (MLN), bone marrow (BM) cells and PBMC Harvested lungs were homogenized using a metal mesh and the suspension was treated with type I collagenase (Invitrogen) per lung for 30 mins at 37°C. Cells were recovered and washed once with complete PRMI medium containing 10% FBS, 1 mM glutamine, 100 U/ml of penicillin and 100 μg/ml of streptomycin. The pelvic and femoral bones were harvested and BM cells were flushed out with complete RPMI medium by insertion of a 1 ml syringe with a 25G needle into one end of the bone. MLN cells were homogenized using glass slides with ground edges. Leukocytes were obtained from the lungs, peripheral blood, BM and MLN after RBC lysis buffer treatment. Cytokine antibody array and ELISA To obtain bronchoalveolar lavage fluid (BALF), airways were flushed three times with 0.5 ml sterile PBS and centrifuged to remove infiltrating cells. Pooled BALFs were assayed using the R & D mouse cytokine arrays (R & D Systems, Inc.) according to the manufacturer's instructions. CCL2, CCL7 and CCL12 proteins were measured in serum using ELISA kits (eBioscience) according to the manufacturer's instructions. Immunofluorescent surface and intracellular staining Two million cells were stained with fluorescently labeled mAbs, including Gr1, CD11b, Ly6C, Ly6G, CCR2 and CX3CR1 for 30 min at 4°C. All Abs were purchased from BD Biosciences, except for CCR2 mAb (R & D Systems). After staining, the cells were fixed with Cytofix (BD Biosciences) for 5 min at 4°C. For intracellular staining, cells were stained with fluorescently labeled anti-Gr1, −CD11b and -Ly6C mAbs and then fixed with Cytofix/cytoperm (BD Biosciences) for 20 mins at 4°C. Fixed cells were further stained with FITC-labeled anti-IAV nucleoprotein (NP) Ab (Abcam) for another 30 min at 4°C. 
Finally, the cells were washed, re-suspended in FACS buffer (PBS with 2% FBS) and analyzed on an LSRII flow cytometer (BD Biosciences). RNA extraction, reverse transcription and quantitative polymerase chain reaction (RT-QPCR) Total RNA was extracted from isolated or sorted cells using TRIzol reagent (Invitrogen) according to the manufacturer's instructions. RNA was used to synthesize cDNA with Superscript III reverse transcriptase (Invitrogen). TaqMan® Gene Expression Assays (Applied Biosystems) were performed to detect mouse CCL2, CCL7, CCL12, iNOS, IFNβ and GAPDH mRNAs. Expression of the various genes was normalized to the GAPDH level in each group. Relative gene expression was determined using ΔΔCt analysis. Western blotting Frozen lung tissues were lysed using lysis buffer (100 mM Tris, 250 mM NaCl, 0.5% sodium deoxycholate, 1 mM PMSF and 0.5% NP40). Tissue lysates were resolved by electrophoresis in SDS-polyacrylamide gels and electrotransferred onto Hybond-P PVDF membranes (GE Healthcare). Milk-blocked blots were incubated with anti-actin and anti-NP antibodies at 4°C overnight, then washed and incubated with horseradish peroxidase (HRP)-conjugated secondary antibodies (Jackson ImmunoResearch) at room temperature for 1 hr. The proteins were revealed using Immobilon Western Chemiluminescent HRP Substrate (Millipore). Oseltamivir treatment PR8-infected mice were treated with 50 mg Oseltamivir daily according to a previous report [18]. Treatment with anti-IFNAR1 blocking antibody PR8-infected mice were treated intranasally at day 3 post-infection with an anti-IFNAR1 blocking antibody (eBioscience). After 3 days, infiltrating cells were counted and then stained with specific Abs against Gr1, CD11b, Ly6G, Ly6C and CCR2. Adoptive transfer of BM-enriched CCR2+ monocytes into mice BM cells from naïve B6 mice were harvested and monocytes were enriched by negative selection using an EasySep™ Mouse Monocyte Enrichment Kit and EasySep™ magnet system (STEMCELL Technologies Inc.). Enriched monocytes were suspended in PBS at a concentration of 2.0 × 10^7 cells/ml and incubated with 5 μM carboxyfluorescein diacetate succinimidyl ester (CFSE, Invitrogen) solution for 12 min at 37°C. One million CFSE-labeled cells were adoptively transferred via the tail vein into naïve or virus-infected mice. After 2 days, leukocytes were harvested from the lungs and stained with anti-Ly6C and anti-CCR2 antibodies. Finally, CCR2+CFSE+ transferred monocytes were traced using flow cytometric analysis. Statistical analysis Statistical significance of the data was analyzed by Student's two-tailed t test. Excessive accumulation of CCR2+ inflammatory monocytes in severe IAV infection We observed varying levels of body weight change and lung inflammation in the infected mice and investigated which infiltrating cell type was associated with severe inflammation. As shown in Figure 1A, mice infected with the mild 141 strain lost 5%-10% of their original body weight, while the moderate SOIV strain caused a 15%-20% loss of original body weight. Notably, severe PR8 infection caused progressive weight loss and led to 100% mortality in the infected mice at days 7-10 post-infection. In these infections, lung inflammation correlated strongly with body weight loss at day 7 post-infection (Figure 1B). Furthermore, we demonstrated that a defined Gr1+CD11b+ myeloid population is preferentially recruited to the infected lung, with only a few of these cells reaching the MLN (Figure 1C).
Of interest, the total numbers of infiltrating leukocytes and Gr1+CD11b+ cells were significantly associated with the severity of inflammation (Figure 1D and E). Gr1+CD11b+ cells are a heterogeneous cell population, so the true identity of the major infiltrating cells was further characterized using the Wright stain and by the expression of Ly6G and Ly6C on the cell surface. Sorted Gr1+CD11b+ cells consisted mostly of mononuclear cells containing abundant cytoplasmic vacuoles and a few segmented granulocytes (Figure 1F, upper panel). Furthermore, Gr1+CD11b+ cells were composed of approximately 68-81% monocytes (Ly6G−Ly6C-high) and 19-32% granulocytes (Ly6G+Ly6C-intermediate) (Figure 1F, lower panel). Using specific Abs against surface CCR2 and CX3CR1, we further demonstrated that the infiltrating monocytes in the lungs were Ly6C-high CCR2+ inflammatory monocytes and not Ly6C-low CX3CR1+ patrolling monocytes (Figure 1G). Importantly, the numbers of infiltrating CCR2+ inflammatory monocytes were highly associated with the severity of inflammation (Figure 1H). Cytokine and chemokine profiling of BALFs To investigate the mechanism behind the extensive accumulation of CCR2+ inflammatory monocytes in severe inflammation, the cytokines and chemokines listed in Figure 2A were evaluated. According to the results of the protein arrays, levels of G-CSF, CCL1, CCL2, CCL12, IL-10, CXCL9, IL-16 and CCL5 were correlated with the severity of lung inflammation (Figure 2A). Notably, both CCL2 and CCL12, like CCL7, are ligands of CCR2. We therefore speculated that the aggressive recruitment of CCR2+ inflammatory monocytes is linked to the expression of CCR2 ligands. In Figure 1C, we found that Gr1+CD11b+ cells preferentially migrate to the lung but not to the MLN. Therefore, we suggested that leukocytes infiltrating the lung may locally induce CCR2 ligands to attract CCR2+ inflammatory monocytes. Indeed, all transcripts of CCR2 ligands were over 4000-fold higher in the lung than in the MLN (Figure 2B). (See figure on previous page.) Figure 1 Excessive accumulation of CCR2+ inflammatory monocytes in severe IAV infection. C57BL/6 mice were infected with 200 PFU of 141, SOIV or PR8 viruses. (A) Body weights were monitored daily until day 14 post-infection (n = 6-8 per group, mean ± SEM). (B) The appearance of lung inflammation was photographed at days 3 and 7 post-infection (n = 3 per group). (C) Total leukocytes were stained with Abs against Gr1 and CD11b. The percentage of Gr1+CD11b+ myeloid cells was analyzed by flow cytometry. (D) Total leukocytes were harvested from the lungs at the time points indicated and counted by trypan blue exclusion. These data are a composite of four to seven independent experiments (n = 3 per group, mean ± SEM; ns: no significant difference; *P < 0.05; **P < 0.01). (E) Numbers of Gr1+CD11b+ myeloid cells in the lung are shown. These data are a composite of four independent experiments (n = 3 per group, mean ± SEM; ns: no significant difference; *P < 0.05; **P < 0.01). (F, upper panel) Gr1+CD11b+ cells were sorted from infiltrating leukocytes and then stained with the Wright stain. Cell morphology was photographed under 1000× magnification using an Olympus microscope. Granulocytes are indicated by arrowheads and monocytes by arrows. (F, lower panel) The percentage of Ly6G−Ly6C-high monocytes in the Gr1+CD11b+ gated population is shown. Dot plots are a representative result from three repeated experiments with three mice per group.
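The relative fold values and P values reported above and in the figure legends follow from the ΔΔCt normalization and the two-tailed Student's t test described in Methods. The following is only a minimal illustrative sketch of that arithmetic: the Ct values are hypothetical, numpy and scipy are assumed to be available, and this is not the authors' actual analysis script.

import numpy as np
from scipy import stats

def rel_fold(ct_target, ct_gapdh, calib_target, calib_gapdh):
    # delta-Ct: target gene normalized to GAPDH within each replicate
    d_ct = np.asarray(ct_target, dtype=float) - np.asarray(ct_gapdh, dtype=float)
    # delta-delta-Ct: further normalized to the calibrator
    # (total leukocytes from naive mice), then converted to fold change
    d_ct_calib = np.mean(np.asarray(calib_target, dtype=float) - np.asarray(calib_gapdh, dtype=float))
    return 2.0 ** (-(d_ct - d_ct_calib))

# hypothetical triplicate Ct values (not measured data)
calib_ccl2, calib_gapdh = [30.5, 30.8, 30.2], [17.1, 17.0, 17.3]
infected = rel_fold([22.1, 21.8, 22.4], [17.0, 16.9, 17.2], calib_ccl2, calib_gapdh)
naive = rel_fold(calib_ccl2, calib_gapdh, calib_ccl2, calib_gapdh)

# two-tailed Student's t test on the relative folds of the two groups
t_stat, p_value = stats.ttest_ind(infected, naive)
print(f"mean fold induction (infected): {infected.mean():.0f}; P = {p_value:.3g}")

In this scheme, normalizing to GAPDH controls for differences in input RNA, and normalizing to the naïve calibrator sets the baseline fold change to approximately 1, so the reported values express induction relative to uninfected total leukocytes.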
In addition, the levels of CCR2 ligands in sera were clearly correlated with the numbers of infiltrating CCR2+ inflammatory monocytes (Figure 2C). These results suggested that robust expression of CCR2 ligands may contribute to the aggressive recruitment of CCR2+ inflammatory monocytes into the lungs. Induction of CCR2 ligands by CCR2+ inflammatory monocytes We sought to determine whether the infiltrating Gr1+CD11b+ cells are possible producers of CCR2 ligands. To test this possibility, total infiltrating leukocytes were separated into Gr1+CD11b+ cells and Gr1−CD11b− cells using a cell sorter (Figure 3A). Compared to leukocytes in the lungs of naïve mice, infiltrating leukocytes harvested from virus-infected mice showed tens- to thousands-fold induction of CCR2 ligands (Figure 3B). The relative fold induction of CCR2 ligands was similar between total leukocytes and sorted Gr1+CD11b+ cells, suggesting that Gr1+CD11b+ cells are probably the main producers of CCR2 ligands. To confirm that CCR2+ inflammatory monocytes were producers of CCR2 ligands, granulocytes and monocytes were sorted from the Gr1+CD11b+ myeloid population (Figure 3C). As shown in Figure 3D, both cell types could express CCL2, CCL7 and CCL12, but expression of these CCR2 ligands was higher in monocytes. Thus, our results suggested that infiltrating CCR2+ inflammatory monocytes act in a positive chemokine feedback loop to recruit more CCR2+ inflammatory monocytes. Induction of CCR2 ligands is dependent on IFNAR1-triggered signaling We next sought to determine which inflammatory signaling pathway was responsible for the induction of CCR2 ligands. Previous studies indicated that the MyD88 and type I IFN signaling pathways can modulate the recruitment of myeloid cells [19,20]. Therefore, MyD88−/− and IFNAR1−/− mice were used in this study. Compared to infected WT and MyD88−/− mice, the expression of CCR2 ligands by Gr1+CD11b+ cells was significantly reduced in infected IFNAR1−/− mice (Figure 4A). In addition, we found that the percentage of CCR2+ inflammatory monocytes was reduced only in IFNAR1−/− mice. In Figure 4B, CCR2+ inflammatory monocytes accounted for 81.8 ± 1.1% of total leukocytes in infected WT mice and 84.5 ± 4.5% in infected MyD88-deficient mice; however, they accounted for only 39.8 ± 0.35% in infected IFNAR1−/− mice. Thus, the accumulation of CCR2+ inflammatory monocytes was suppressed when the IFNAR1-induced expression of CCR2 ligands was interrupted. Because aggressive recruitment of Gr1+CD11b+ cells was observed after day 3 post-infection (Figure 1E), we wondered whether intranasal treatment with an anti-IFNAR1 blocking antibody at day 3 post-infection could interrupt the influx of CCR2+ inflammatory monocytes. In Figure 4C and D, the recruitment of CCR2+ inflammatory monocytes was reduced significantly in anti-IFNAR1 blocking antibody-treated mice, but not in isotype control-treated mice. Overall, these data implied that the excessive recruitment of CCR2+ inflammatory monocytes depends on continuous IFNAR1-induced expression of CCR2 ligands. Impaired anti-viral responses prolong IFNβ expression Type I IFNs (IFNα and IFNβ) are considered to bind heterodimeric complexes of IFNAR1 and IFNAR2. A recent study has shown that induction of CCL2 and CCL7 is triggered by IFNAR1-IFNβ signaling in IFNAR2−/− mice [21].
In addition, we also observed differential expression of CCR2 ligands among sorted Gr1+CD11b+ cells in 141, SOIV and PR8 infections (Figure 3B). Therefore, we examined the expression levels of IFNβ in all infected mice. As expected, expression of IFNβ was detected only in the sorted Gr1+CD11b+ cells harvested from PR8-infected mice at day 7 post-infection (Figure 5A). In addition, both granulocytes and monocytes in the Gr1+CD11b+ population could express IFNβ (data not shown). Because detectable IFNβ production reflects active viral replication, the anti-viral responses of the host were examined by measuring virus titers and detecting influenza NP expression in the infected lung. As shown in Figure 5B and C, 141-infected mice completely eliminated the virus at day 7. SOIV-infected mice still showed weak expression of NP at day 7, and the host completely cleared the virus at day 8 post-infection. Of note, PR8-infected lungs still showed strong NP expression and viral replication at days 7-8 post-infection. (Figure 3 legend, continued: Gr1+CD11b+Ly6G+ and Gr1+CD11b+Ly6G− cells were sorted. (D) RNA was extracted from total leukocytes, sorted Gr1+CD11b+Ly6G+ cells and sorted Gr1+CD11b+Ly6G− cells from PR8-infected mice, and expression of CCL2, CCL7 and CCL12 was measured by RT-QPCR. The mRNA relative folds were determined by normalizing the level of each group to the corresponding GAPDH level and then to total leukocytes from naïve mice (mean ± SEM; ns: no significant difference; *P < 0.05; **P < 0.01; ***P < 0.001). The experiment (n = 4-5 mice per group) was performed twice and one representative result is shown.) These data suggested that the duration of IFNβ production is a function of the rate of viral clearance. Next, we sought to explore why Gr1+CD11b+ cells produce abundant IFNβ in PR8-infected mice in the late phase of infection. We hypothesized that recruited CCR2+ inflammatory monocytes are infected by the PR8 virus, resulting in amplified production of IFNβ. Indeed, expression of influenza NP was detected in CCR2+ inflammatory monocytes in PR8-infected mice (Figure 5D). Thus, our results suggested that impaired clearance of PR8 virus prolonged the expression of IFNβ, which led infected CCR2+ inflammatory monocytes to amplify their own recruitment via an IFNAR1-triggered chemokine feedback loop. To determine whether high viral loads are potent inducers of CCR2+ monocyte infiltration, an anti-viral drug, Oseltamivir, was used to suppress virus replication in infected mice. In Figure 5E, body weight loss was attenuated when infected mice received Oseltamivir treatment, demonstrating the efficacy of Oseltamivir. The influx of CCR2+ inflammatory monocytes was dramatically reduced in Oseltamivir-treated mice compared to PBS-treated mice (Figure 5F). Taken together, our results supported the concept that continuous recruitment of CCR2+ inflammatory monocytes by the IFNAR1-triggered chemokine feedback loop is attributable to the extended duration of IFNβ expression in the late phase of infection. The balance of CCR2+ inflammatory monocytes between the BM and lungs We next examined the source of infiltrating CCR2+ inflammatory monocytes in the host. In general, CCR2+ inflammatory monocytes are generated in the BM and migrate rapidly to inflamed sites following pathogen invasion [22]. Therefore, we first checked the proportions of Gr1+CD11b+ cells and CCR2+ inflammatory monocytes in the PBMC and BM during infection.
As shown in Figure 6A-D, the proportions of Gr1+CD11b+ cells and CCR2+ inflammatory monocytes in the PBMC were positively correlated with disease severity. In contrast, the proportion of CCR2+ monocytes in the BM was inversely correlated with the severity of inflammation (Figure 6E). In Figure 6F, total CCR2+ monocytes were significantly decreased in the BM of PR8-infected mice compared to those in 141- and SOIV-infected mice. These results implied that CCR2+ monocytes are rapidly recruited from the BM to the infected lung and that the mobilization of these cells probably depends on the expression of CCR2 ligands. To demonstrate CCR2-mediated trafficking of inflammatory monocytes to the lungs, BM-enriched CCR2+ monocytes were isolated from naïve mice, labeled with CFSE, and then adoptively transferred into naïve, 141-, SOIV- or PR8-infected mice. After 2 days, transferred CCR2+ inflammatory monocytes were traced by the CCR2+CFSE+ signals on the cells (Figure 6G). In Figure 6H, more transferred CCR2+CFSE+ monocytes were found in PR8-infected lungs than in 141- and SOIV-infected lungs. CCR2−/− mice were used to confirm that the influx of CCR2+ inflammatory monocytes was dependent on CCR2-triggered chemotaxis. In Figure 6I, the proportion of Gr1+CD11b+ cells was significantly decreased in infected CCR2−/− mice compared to WT mice. In Figure 6J, only a few CCR2+ inflammatory monocytes were detected in the blood and lungs of infected CCR2−/− mice, underscoring the importance of CCR2-driven monocyte localization within the infected lung. Pathological effects of CCR2+ inflammatory monocytes upon IAV infection A previous study showed that monocytes are retained in the BM when they lack CCR2 expression [23]. To investigate the biological consequences of an excessive accumulation of CCR2+ inflammatory monocytes in the lungs, CCR2−/− mice were used to examine leukocyte infiltration, the cytokine storm, expression of iNOS and the survival rate after a lethal-dose challenge with PR8 virus. In the absence of infiltrating CCR2+ inflammatory monocytes, total leukocytes in the lung and expression of CCL1, sICAM-1, IFNγ, IL-1ra, IL-16, M-CSF, CCL2, CCL12 and CXCL9 in BALF were decreased, suggesting that CCR2+ inflammatory monocytes contribute to the expression of these molecules (Figure 7A and B). (See figure on previous page.) Figure 4 Induction of CCR2 ligands is dependent on IFNAR1-triggered signaling. (A) Gr1+CD11b+ myeloid cells were isolated from the lungs of PR8-infected WT, MyD88−/− and IFNAR1−/− mice and RNAs were extracted. Expression of CCL2, CCL7 and CCL12 was measured by RT-QPCR. The mRNA relative folds were determined by normalizing the level of each group to its GAPDH level and then to WT infected mice (mean ± SEM; ns: no significant difference; ***P < 0.001). These data are a composite of three independent experiments (n = 6 mice per group). (B) Total leukocytes were isolated from PR8-infected WT, MyD88−/− or IFNAR1−/− mice and the cells were stained with anti-Gr1, -CD11b, -Ly6C and -CCR2 Abs. The percentage of Ly6C-high CCR2+ inflammatory monocytes in Gr1+CD11b+ gated cells is shown. This is a representative result of two repeated experiments with two to three mice per group. (C-D) Day 3 PR8-infected mice were treated either with isotype control antibody or anti-IFNAR1 blocking antibody. After 3 days, leukocytes were harvested and stained with anti-Gr1, -CD11b, -Ly6C and -CCR2 Abs. The percentage and numbers of Ly6C-high CCR2+ inflammatory monocytes in Gr1+CD11b+ gated cells are shown (mean ± SEM; ***P < 0.001). These data are a composite of two independent experiments (isotype control, n = 5; anti-IFNAR1 Ab treatment, n = 6). Consistent with the results from the cytokine arrays, expression of CCR2 ligands was also significantly decreased in the infiltrating leukocytes of CCR2−/− mice compared to WT mice (Figure 7C). A previous report has shown that iNOS is induced in activated myeloid cells and is significantly involved in the development of IAV-induced pneumonitis [24]. As shown in Figure 7D, Gr1+CD11b+ cells were the predominant producers of iNOS. Interestingly, expression of iNOS was correlated with the severity of inflammation. To demonstrate further the importance of CCR2+ inflammatory monocyte-mediated immunopathological effects, expression of iNOS and the survival rate were compared in PR8-infected WT and CCR2−/− mice. Expression of iNOS transcripts was dramatically reduced in infected CCR2−/− mice (Figure 7E). Finally, 38.5% of infected CCR2−/− mice, but none of the WT mice, survived a lethal-dose challenge with PR8 virus (Figure 7F). Thus, infiltrating CCR2+ inflammatory monocytes play a pivotal role in the pathological effects mediated by highly virulent IAV infection. Discussion IAV not only infect pulmonary epithelial cells, endothelial cells and resident alveolar macrophages but also infiltrating granulocytes, monocytes and dendritic cells [18,25]. Furthermore, infected leukocytes are major contributors to the aggressive production of inflammatory innate immune responses. Before entering the inflamed lung, these uninfected infiltrates are already primed with type I IFN, which upregulates the levels of MDA5, RIG-I and IRF7 [26,27]. (Figure 7 legend, continued: BALFs were subjected to cytokine or chemokine expression analysis using cytokine protein arrays (n = 6 mice per group). (C) Relative expression of CCL2, CCL7 and CCL12 was measured by RT-QPCR. The mRNA relative folds were determined by normalizing the level of each group to the corresponding GAPDH level and then to total leukocytes from WT mice (mean ± SEM). This is a representative result from two repeated experiments. (D) RNAs were extracted from total leukocytes, Gr1+CD11b+ sorted cells and Gr1−CD11b− sorted cells from the virus-infected mice indicated. Relative expression of iNOS transcripts was measured by RT-QPCR. The mRNA relative folds were determined by normalizing the level of each group to the corresponding GAPDH level and then to total leukocytes from naïve mice (mean ± SEM). The experiment (n = 3-6 mice per group) was performed twice and one representative result is shown. (E) RNAs were harvested from leukocytes isolated from the lungs of infected WT and CCR2−/− mice. Relative expression of iNOS was measured by RT-QPCR. The mRNA relative folds were determined by normalizing the level of each group to the corresponding GAPDH level and then to total leukocytes from WT mice (n = 3 mice per group; mean ± SEM). The experiment was performed twice and one representative result is shown.) In addition, these IFN-stimulated molecules, coupled with viral nucleic acids, are responsible for amplified production of type I IFN [28]. Our findings revealed that the rate of virus clearance determines the duration of IFNβ expression in infiltrating Gr1+CD11b+ cells. Sustained expression of IFNβ was critical for the aggressive recruitment of CCR2+ inflammatory monocytes in severe inflammation.
When virus replication was suppressed by Oseltamivir, body weight loss and the influx of CCR2+ inflammatory monocytes were significantly reduced. These results indicated that excessive accumulation of CCR2+ inflammatory monocytes plays a crucial role in the pathological outcomes of highly pathogenic H1N1 IAV infections. We are continuously threatened by sporadic infections with emerging avian influenza viruses, including highly pathogenic avian H5N1 and H7N9 viruses, which rapidly lead to acute respiratory distress syndrome characterized by excessive infiltration of neutrophils and monocytes into the lungs, high viral loads and hypercytokinemia [29,30]. Furthermore, it is worth investigating whether the same phenomenon is observed in avian influenza infections, such as H5N1 and H7N9. If accumulation of CCR2+ inflammatory monocytes is a common phenomenon in highly pathogenic influenza infection, CCR2+ inflammatory monocytes would be a promising therapeutic target in such infections. The mechanism underlying the accumulation of CCR2+ inflammatory monocytes in severe IAV infection remains largely unclear. Previous reports have shown that small numbers of neutrophils are recruited early in infection, followed by an influx of large numbers of monocytes [4]. Based on our results, CCR2 ligands produced by neutrophils might play a key role in the early recruitment of monocytes. In Figure 1F, the ratio of neutrophils to monocytes was skewed according to the degree of inflammation at day 7 post-infection. This result indicated two things: (1) Monocyte-attracting chemokines were provided not only by neutrophils but also by other inflammatory cells. Using cell sorting, we showed that accumulated CCR2+ inflammatory monocytes are the main contributors of CCR2 ligands and thereby amplify their own recruitment. (2) Infiltrating monocytes might interfere with the further influx of neutrophils in severe inflammation. A previous study has demonstrated that type I IFN suppresses the neutrophil-attracting chemokines CXCL1 and CXCL2, leading to impaired recruitment of neutrophils [31]. Therefore, we suggested that sustained expression of IFNβ from CCR2+ inflammatory monocytes interrupts the recruitment of neutrophils. Indeed, our data are consistent with previous reports that a doubling of neutrophil numbers is observed in CCR2−/− and IFNAR1−/− mice [20,32]. Production of type I IFN is a double-edged sword in terms of viral clearance and virus-mediated pathogenesis. Previous studies have shown that prolonged induction of type I IFN occurs in highly virulent IAV infections, leading to severe consequences: (1) Type I IFN-induced apoptosis of alveolar epithelial cells by TRAIL is observed in severe IAV infections [33]. (2) Type I IFN-induced FasL expression in the epithelial cells of the lung contributes to the severity of infection [34]. (3) Type I IFN mediates the development of post-influenza bacterial infections [31]. In our study, CCR2+ inflammatory monocytes amplify their own recruitment via a prolonged IFNAR1-triggered chemokine feedback loop. In our study, the induction of CCR2 ligands in CCR2+ inflammatory monocytes was dependent on the IFNAR1-triggered signaling pathway. However, recruitment of CCR2+ inflammatory monocytes could not be completely abolished in IFNAR1−/− mice, suggesting that induction of CCR2 ligands in other cell types by IFNAR1-independent pathways cannot be excluded.
Indeed, expression of CCL2 is regulated by sphingosine-1-phosphate receptor-triggered signaling in pulmonary endothelial cells or by the MyD88-mediated pathway in pulmonary epithelial cells [35-37]. In addition, IL-1R signaling is also involved in CCL2 induction in undefined cell types during IAV infection [38]. A previous study has demonstrated that deficiency of a single ligand, either CCL2 or CCL7, blocks only 40-50% of monocyte egress from the BM [39]. Thus, it is not surprising that gene deficiency of CCL2 cannot protect mice against highly pathogenic virus-mediated death [7]. Our results indicated that CCL2, CCL7 and CCL12 were all highly induced during IAV infections. Therefore, blockade of any single CCR2 ligand is not sufficient to block the recruitment of CCR2+ inflammatory monocytes. (See figure on previous page.) Figure 8 Roles of CCR2+ inflammatory monocytes in highly pathogenic IAV infection. We achieved varying degrees of weight loss with mildly, moderately or severely inflamed lungs in mice inoculated with the 141, SOIV or PR8 strains. These H1N1 infection models with variable efficiencies of viral clearance result in the accumulation of varying numbers of CCR2+ inflammatory monocytes, which are highly associated with the generation of a cytokine storm and expression of iNOS. In the early phase of infection, we propose that a small number of infiltrating CCR2+ inflammatory monocytes are infected with IAV and respond to autocrine and/or paracrine IFNβ, which induces the expression of the CCR2 ligands CCL2, CCL7 and CCL12. Recruited CCR2+ inflammatory monocytes drive further recruitment of CCR2+ inflammatory monocytes from the BM to the lung through CCR2-dependent chemotaxis. In the late phase of infection, impaired clearance of PR8 virus leads to spread of infection to recently arrived CCR2+ inflammatory monocytes and to sustained production of CCR2 ligands induced by the IFNAR1-IFNβ signaling axis, which cause infiltrating CCR2+ inflammatory monocytes to amplify their own recruitment continuously through the IFNAR1-dependent chemokine feedback loop. Recruited CCR2+ inflammatory monocytes play a critical role in innate and adaptive immune responses during IAV infections. In the successfully cleared 141 and SOIV infections, CCR2+ inflammatory monocytes expressed higher levels of IFNγR, MHC class I and MHC class II molecules than monocytes isolated from PR8-infected mice (data not shown). Our study and previous studies have demonstrated that monocytes are among the cell types most susceptible to IAV infection [40,41]. Therefore, we suggested that the rate of viral clearance largely determines whether infiltrating CCR2+ inflammatory monocytes play a protective or a pathological role. In MCMV and LCMV infections, CCR2+ inflammatory monocytes produce large amounts of iNOS and facilitate the production of nitric oxide (NO), which plays a critical role in impairing anti-viral CD8 T cell responses [14,15]. In our study, CCR2+ inflammatory monocytes expressed iNOS, and its expression was correlated with the rate of viral clearance. Thus, these results implied that excessive accumulation of CCR2+ inflammatory monocytes might interfere with effective anti-viral CD8 T cell responses via excessive NO production in highly pathogenic IAV infections. In summary, overabundant innate immune responses produced by monocytes contribute significantly to highly pathogenic virus-mediated fatal outcomes.
Based on our findings, the proportion of CCR2+ inflammatory monocytes in the blood and the concentration of CCR2 ligands in the serum have potential as translational biomarkers to predict IAV virulence and pathogenesis in an emerging pandemic infection and in sporadic infections of avian IAVs. In addition, inhibiting the recruitment of CCR2+ inflammatory monocytes, or depleting infiltrating CCR2+ inflammatory monocytes, may provide an alternative immunotherapeutic approach to reduce the damaging effects of accumulating CCR2+ inflammatory monocytes in highly pathogenic IAV infections. Conclusion The excessive accumulation of Gr1+CD11b+ cells is strongly associated with severe lung pathology in highly pathogenic 1918 H1N1 and avian H5N1 infections [42]. Through a detailed characterization of Gr1+CD11b+ cells, we found that CCR2+ inflammatory monocytes are the prominent cell type and that they contribute to overabundant inflammatory immune responses. In this study, we demonstrated that the accumulation of infiltrating CCR2+ inflammatory monocytes is determined by the efficiency of the host in clearing the virus. Based on our findings, CCR2+ inflammatory monocytes are one of the determinants of the pathogenicity of highly pathogenic IAV infection (Figure 8).
2017-07-06T20:59:59.137Z
2014-11-18T00:00:00.000
{ "year": 2014, "sha1": "41a1aa5f66f83bc661c4b047e8fa0abddc26a775", "oa_license": "CCBY", "oa_url": "https://jbiomedsci.biomedcentral.com/track/pdf/10.1186/s12929-014-0099-6", "oa_status": "GOLD", "pdf_src": "PubMedCentral", "pdf_hash": "72b6e571746493f62630cc8eb59cd6fae5cb71e2", "s2fieldsofstudy": [ "Biology", "Medicine" ], "extfieldsofstudy": [ "Biology", "Medicine" ] }
255362819
pes2o/s2orc
v3-fos-license
Institutional constraints affecting secondary school student performance: A case study of rural communities in Zimbabwe Abstract The study analyses the institutional constraints affecting child performance in secondary schools within rural communities in Zimbabwe. A qualitative approach employing purposive sampling of participants was used in the study. Eight focus group discussions (FGDs), 16 in-depth interviews and four key informant interviews were conducted in eight rural secondary schools located in Seke and Shamva districts. The general systems theory was embraced as the theoretical framework of this study. The principle of theoretical saturation was applied in both the focus group discussion sessions and the interviews, which ran until a clear pattern emerged and subsequent groups produced no new information. Data were analysed using thematic analysis techniques, which systematically coded the data to discover prevailing trends. The major highlight of the results is that rural learners have to grapple with many challenges in trying to access education. Inadequate resources, long distances to school and demotivated teachers constitute the main highlights of the findings. The study established that there is a need to explore more issues affecting the realisation of equitable and inclusive education systems, specifically for learners in rural communities. This article also posits that addressing the institutional constraints affecting child performance in rural secondary schools is not an insurmountable task. It does, however, require a holistic approach through the engagement of all stakeholders involved in the drive towards quality education and leaving no one behind. PUBLIC INTEREST STATEMENT This article examines the institutional constraints faced in the attempt to promote access to quality education for rural learners in Zimbabwe. The study is based on a qualitative enquiry conducted in Seke and Shamva districts, where participants drawn from teachers, parents, guardians and education officers were targeted. Using the general systems theory, the study reveals that learners in rural communities have to overcome many challenges to obtain a basic education. Resources are limited, teachers are demotivated, learners walk long distances to school and access to the internet is minimal. The study foregrounds the need to implement policies that are sensitive to poor rural communities in order for inclusive and equitable education to be realised. It is these policies that can transform the education opportunities of disadvantaged learners who lack meaningful educational resources. Introduction and background Various studies have been conducted in relation to the institutional constraints affecting child performance in rural secondary schools, for example, Munaka (2016), Mudzengerere (2018), Shava (2020) and Nhlumayo (2020). Findings show that learners in rural secondary schools continue to face institutional constraints in their education system. Despite the government's efforts in coming up with policies and strategies to address educational constraints for learners, there have been no significant changes in the learning environment in rural communities. Public schools in rural areas also charge levies to learners, which impedes access to education for underprivileged learners. Internationally, some rural schools have been more disadvantaged than urban schools. However, the challenges differ from country to country.
In European countries, challenges emanate from curriculum development as well as the small numbers of students in rural schools (Kimonen & Nevalainen, 1996). In Africa, generally, the constraints faced by rural students range from poverty, ethnic and economic conflicts, low levels of technological advancement and limited infrastructure to poor access to resources and lack of information (Sintema, 2020). By comparing the context of rural secondary school education in Zimbabwe with that of other regional and international nations, it has been observed that there are commonalities pointing to the existence of institutional constraints, which are crucial in formulating sustainable educational changes for learners (Beckman & Gallo, 2015). According to Sintema (2020), rural learners face a number of challenges that have been exacerbated by, for example, the Covid-19 epidemic, for which the state of preparedness of rural schools has always been very poor. A similar study examines the potential impact of Covid-19 on education systems, including those in schools within rural communities. Munaka (2016) complemented research conducted by other scholars, reflecting that learners in rural communities are not only disadvantaged in terms of learning institutions but also lack adequate learning materials in their communities; for example, they lack access to libraries. According to Sintema (2020), the outbreak of the coronavirus in 2020 ravaged most parts of China, the United States of America, Italy, Spain and other parts of Europe and Africa. This meant that secondary school learners in both rural and urban communities ended Term 1 of the 2020 academic year without sitting their end-of-term exams owing to poor preparedness for such an epidemic. Research conducted in Zimbabwe revealed that students in schools within rural communities continue to operate under very challenging conditions, as reflected in the 2019-2020 Primary and Secondary Education Statistics Report (Mathema, 2020). Maunganidze (2019) also concurred with other researchers that over the years the quality of education has been very poor, especially in rural communities, as schools continue to face many institutional constraints. Concerns raised about the institutional constraints affecting education come at a time when there is growing recognition across the globe, regionally and nationally, of the potential role of educating children. In line with the foregoing observation, various goals and policies aimed at transforming the quality of education have been developed, while others are still being developed (Shava, 2020; Palmer, 2015). The issue of addressing institutional constraints within the education sector has been very topical in global, regional and national forums. In these forums, it has been noted that there are barriers preventing learners from gaining equal access to education beyond financial challenges (Munaka, 2016). Some declarations have been made in these forums, such as the Millennium Development Goals (MDGs) of September 2000 and the Education for Sustainable Development Agenda 2030 Global Goals adopted in 2015 (Shava, 2020). Regionally, there is the Southern African Development Community (SADC) Protocol on Education and Training, which guides SADC members on the Education and Skills Development Programme (Mashininga, 2021).
The author posits that the protocol facilitates and coordinates the harmonisation and implementation of regional policies and programmes to ensure access to relevant and quality education and training in the SADC region, but this has met with little success. In Zimbabwe, the average pass rate among rural secondary schools is far below the national standard, with many schools achieving even a zero percent pass rate, a situation attributed to institutional constraints (Hapanyengwi et al., 2018; Nyoni et al., 2017). These constraints comprise the type of school leadership, psychological challenges, lack of motivation, lack of extracurricular activities and the teacher-pupil ratio (Nyoni et al., 2017). Interventions are needed to address the existing institutional constraints in secondary schools within rural communities. There is a need to strengthen the voice of parents and communities by formalising regulations and policies. This paper recognises the veracity of the recommendations of various researchers, such as Munaka (2016) and Shava (2020), that there is a need for more research on and assessment of institutional constraints affecting child performance in secondary schools within rural communities. There is a dearth of literature with regard to institutional constraints affecting child performance, such as children's psychological challenges, issues of teacher patriotism as a self-motivation concept and the absence of contingency plans to mitigate the non-availability of extracurricular activities (Mashininga, 2021; Masud et al., 2019; Munaka, 2016). Whilst education is one of the most effective instruments a nation has at its disposal for promoting sustainable social and economic development (Shava, 2020), access to and quality of education vary significantly across contexts. The 2019-2020 Primary and Secondary Education Statistics Report in Zimbabwe highlighted that parents in rural communities have been discontented with the institutional constraints affecting their children's performance in secondary schools (Mathema, 2020). The report highlighted that more still needs to be done, as many children in rural communities continue to encounter institutional constraints. Furthermore, School Boards have been putting pressure on District Education Boards to come up with a lasting solution to ensure that learners perform better regardless of their geographical location, physical challenges or any other form of institutional constraint. In line with this drive, the government of Zimbabwe has made various declarations towards improving the quality of education, for example, Sustainable Development Goal number 4 and the Millennium Development Goals (Mangena & Chitando, 2011), but with little success. Education for Sustainable Development, through goal number 4, seeks to address educational challenges, but so far the objective has not been achieved (Mudzengerere, 2018). There is a need for an inclusive and equitable quality education that promotes lifelong learning opportunities for all children, regardless of their geographical location (i.e., rural or urban communities), by addressing institutional constraints. Little or no effort has been made in Zimbabwe to look into institutional constraints through the Sustainable Development Goals, the Millennium Development Goals and educational policies, among others, yet previous research reflects that there are existing institutional constraints affecting child performance in secondary schools within rural communities (Munaka, 2016).
Despite the fact that policies, strategies and programmes have been formulated to address institutional constraints affecting child performance in secondary schools within rural communities, there is a need to explore the effectiveness of such strategies and programmes. This would ensure that there is inclusive, quality and equitable education for all learners. The study seeks to establish how equitable, inclusive and quality education, which is the golden thread that runs through sustainable development, can address the institutional constraints affecting children's performance in secondary schools within rural communities. The study acts as a reference point for studies related to the institutional constraints affecting child performance in secondary schools within rural communities. Students interested in pursuing this area of study will also benefit. This study also builds and strengthens the university's collaboration with the government and drives cross-disciplinary collaborations and partnerships with various stakeholders, leading to new funding streams. Literature review on the institutional constraints affecting quality of education in rural communities This section gives an overview of the findings of studies exploring the various hindrances encountered by learners in rural contexts in trying to access quality education. Equitable education and rural infrastructure Equitable education promotes learners' educational performance, especially for those from marginalised societies, the physically challenged and those from poor backgrounds (Gautier, 2003). Equity in education entails a situation whereby individual or social circumstances, such as gender, ethnic origin or family background, are not obstacles to achieving educational potential (the definition of fairness) and all individuals reach at least a basic minimum level of skills (the definition of inclusion). Findings from previous research reflect that the issue of equitable education in rural secondary schools is still a cause for concern. Learners from poor family backgrounds still face numerous constraints, a situation that has hindered equitable education in rural communities (Shava, 2020). Rural areas are remote and poorly developed; consequently, the schools in these areas are disadvantaged. They lack basic infrastructure for teaching and learning, roads and other transport, electricity and information communication technologies (ICTs; Mandina, 2012). Thus, most rural areas have a poor socioeconomic background, which invariably plays a role in hindering the provision of quality education. Van den Berg (2008) has noted that home background, especially socioeconomic status, is an important determinant of educational outcomes and strongly affects learning. The socioeconomic realities of rural areas put learners at a disadvantage, as the materials and conditions required for quality and equitable education are missing. Many rural families and communities lack transportation and social programmes for their children, a situation which hinders effective rural education (Beeson & Strange, 2000). Van den Berg (2008) points out that quality education mostly depends on teaching and learning processes, the relevance of the curriculum, and the availability of materials and enabling learning environments, but these requirements are in most cases not available in rural schools.
Previous researchers have also supported this, noting that students without basic resources in their schools and environments perform poorly as a result of the learning difficulties they experience within their classrooms. As a result, these learners end up obtaining lower test scores than learners in environments with the required conditions. Generally, rural areas have fewer schools, which are located far from each other and at a significant distance from the communities, complicating accessibility for learners. Some learners travel long distances, ranging from 5 to 15 km, and therefore suffer from walking fatigue, which affects their level of concentration, participation and learning outcomes. Rural secondary schools also face a marked shortage of qualified teachers, learning materials and other educational resources, such as libraries (Musarurwa, 2011). In Zimbabwe, most qualified teachers shun rural life and end up leaving for better conditions or greener pastures in countries such as South Africa, Namibia or Botswana. These are some of the setbacks that make rural education in Africa unattractive and challenging. Theoretical framework The general systems theory formed the theoretical framework of this study. This is because systems theory focuses on the interrelationships of elements that exist in nature, for example, in a learning environment. One basic principle of this theory is that everything is connected to everything else. The theory is appropriate to frame this study since it holds that learning among students can only occur if institutional constraints are tackled holistically to enable efficient and effective teaching to take place in these rural secondary schools. The general systems theory emphasises that a holistic approach is important in that it supports a coordinated approach among key stakeholders. This collaborative approach among various stakeholders goes a long way towards addressing institutional constraints such as lack of learning resources, lack of experienced teachers, psychological challenges and cultural diversity (Darling-Hammond et al., 2020). The theory emphasises a broader perspective on education, which requires rigorous practice involving approaches, activities and links to outcomes. Therefore, schools need to employ extensive thinking and activities, so that they know what they are doing and can measure their actions. The theory emphasises the need to value everyone equally and fairly, as this helps in achieving the goals of the schools. The theory provides a basis for understanding the dynamics of implementing Sustainable Development Goal number 4 in rural secondary schools that continue to face institutional constraints. The study area In this study, two rural districts, namely Seke and Shamva, were chosen. Seke District is one of the communal areas in Mashonaland East Province, situated to the east of Harare city, approximately 41 km from Harare. The district has a population of 100 756 people (Zimbabwe National Statistics Agency Report, 2022). There are 22 secondary schools with a total of 7 622 students (Ministry of Primary and Secondary School Strategic Plan, 2016-2020). The community relies heavily on peasant farming (Maruve & Chitongo, 2017). Thus, the community's major source of income is agriculture, with crops such as maize, tobacco, groundnuts and round nuts.
In addition, Shamva District is found in Mashonaland Central Province and is situated 98 km to the north of Harare city. The community has a population of 70 701 people (Zimbabwe National Statistics Agency Report, 2022). There are 29 secondary schools within the district with a total of 11 299 students, based on 2019 statistics (Ministry of Primary and Secondary School Strategic Plan, 2016-2020). The community's major source of income for their children's school fees is peasant farming and mining (Helliker et al., 2018). Thus, although the district is agro-based, the community relies heavily on small-scale mining for its survival as well as for tuition fees. Research approach This study utilised qualitative research methods to investigate the institutional constraints affecting child performance in secondary schools within rural communities. A qualitative research method is a type of research that relies on non-numeric data, usually in the form of words (Jackson et al., 2007). Qualitative research allowed the use of several data collection methods, such as focus group discussions, in-depth interviews and analysis of documents. Target population and sampling The study targeted rural secondary schools in Seke and Shamva districts in Mashonaland East and Mashonaland Central provinces of Zimbabwe, respectively. Respondents were purposively selected from officials in the Ministry of Education, with one representative chosen in each district. Researchers also interviewed one Public Service Commission official per district. These four respondents constituted key informants who provided rich information pertaining to educational policy implementation. Sixteen in-depth interviews were held with school heads and purposefully selected teachers. Experience was a key criterion in selecting respondents for the in-depth interviews. Focus group discussions were held with parents and guardians of learners whose schools were purposefully sampled. Data collection and methods The study triangulated data from interviews, focus group discussions and secondary data elicited from key policy documents. Focus group discussion The rationale for using the focus group discussion (FGD) in this study was that it offered an opportunity to explore issues that are not well understood or on which there was little information prior to the research. A total of eight focus group discussions were held with parents and guardians. Using FGDs enabled the researchers to build on group dynamics and to explore the institutional constraints in context. The FGD is reliable and provides valid results that enable researchers to portray authentic experiences. The principle of theoretical saturation was applied during the focus group discussion sessions, which ran until a clear pattern emerged and subsequent groups produced no new information. In-depth interviews (i.e. key informant and individual) In-depth interviews were held with teachers and administrators in the two districts; they are distinguished from key informant interviews in the sense that the respondents did not need to be technically conversant with the prevailing educational policies. Inspectors and Public Service Commission workers were purposively selected and are classified as key informants because they provided rich information regarding institutional constraints related to policies. The interviews were audio recorded, and notes were taken during the discussions and interviews.
The transcripts of the discussions were typed up using Microsoft Word 2016. Each interview lasted between 45 and 50 minutes. The principle of theoretical saturation was also applied during the interview sessions, which ran until a clear pattern emerged and subsequent participants produced no new information. Documentary analysis Data were also collected and analysed using evidence from various key documents. The documents analysed included school annual reports, reports from the ministry and circulars. Official statistics were also helpful in showing the significant institutional constraints existing in rural secondary schools. Documentary analysis complemented the primary data gathered through focus group discussions and in-depth interviews. Data analysis Thematic analysis was used in this study to analyse classifications and present themes (patterns) that relate to the data (Braun & Clarke, 2006). It allowed the identification of themes and patterns inherent in the data (Braun & Clarke, 2006). Thematic analysis is not tied to a particular data collection method or any particular epistemological perspective (Maguire & Delahunt, 2017). An overlap between themes was observed in the study. The contextual exploration of these themes is provided below. The themes highlighted in the findings and discussion sections are as follows:
• Education policy is biased towards urban areas.
• Inequality is worsened by geographic context.
• There is a gap between policy formulation and policy implementation.
• Institutional constraints impact people differently even in the same context.
• Coordinated intervention strategies are needed to address rural institutional constraints.
Findings and discussions of the study This section presents the research findings from the fieldwork, where respondents were asked questions to give their own perspectives. The paper discusses the findings thematically to ensure continuity and a smooth flow of ideas. The study gathered the following: Rural areas are domains of backward educational institutions The issue of equitable education in rural secondary schools is still a cause for concern. Learners from poor family backgrounds face challenges in accessing adequate food, a situation that has hindered their access to education. Learning institutions in rural areas face challenges such as limited library resources, thereby negatively impacting the potential of rural learners to excel. It emerged from the focus groups that introducing information communication technology in rural communities could be a panacea for library resource constraints, as this would enable online learning modalities. Whilst urban schools are privileged in accessing good network coverage, their rural counterparts are rarely connected to the various network service providers. Respondents highlighted that physical libraries could also help to improve pass rates among rural learners. Policy consciousness and institutional constraints There is a pronounced gap in terms of policy literacy regarding educational policies in the country. Whilst policies meant to promote equality in education do exist, the majority of people in these rural areas are not aware of them and therefore cannot claim the various rights enshrined in circulars. For example, a number of respondents professed ignorance of the provisions of the Ministry of Primary and Secondary Education Circular number 3 of 2019.
One parent highlighted: "As parents in this community, we have no idea of documents like education circulars which urge us as parents to ensure that our children go to school despite the fact that we cannot afford school fees." The foregoing sentiment by one of the parents in the Seke community is clear testimony that most of the parents are not aware of the provisions of the new policy, i.e., Zimbabwe Circular 3 of 2019. Many parents or guardians have children staying at home and cannot afford to pay school fees or meet other demands made by schools. They are also not aware of the new policy, which states that learners are not supposed to be sent back home from school even when they are in arrears. Curriculum choice and human capital constraints Respondents revealed that in rural secondary schools, subjects are imposed according to the availability of teachers to take up such learning areas. A key informant in the Public Service Commission lamented that despite the learners' wish to take up subjects that are necessary for their growth, they are constrained by the availability of teachers. This scenario is more pronounced in rural schools, where push factors force critical staff to transfer to urban areas. Untrained, relief and inexperienced teachers have been the anchor of many schools in Shamva District. Respondents concurred that these teachers may not have the craft competency to teach certain subjects and may actually skip certain topics deemed challenging. Thus, inadequate human capital in schools has negatively impacted the potential of rural learners. In this light, one respondent from the focus group discussion in Shamva District mentioned that there are not enough adequately trained and professional teachers deployed in these communities. Another key informant concurred, describing rural secondary schools as "breeding grounds" where temporary or student teachers are deployed before they leave to join urban communities. High staff turnover, the deployment of untrained and inexperienced teachers, and the general hardships experienced in rural areas have militated against the thrust of equitable education. This observation is supported by the sentiments echoed by the respondents, who underscored the need to capacitate rural schools with amenities such as rural allowances, rural electrification and provision of the internet. Feedback from the respondents reveals that the curriculum taught in the schools poses a great challenge to moulding graduates who are suited to 21st century challenges. As already highlighted, subjects are imposed according to the availability of teachers to take up such lessons. Despite some learners' wish to take up technical and science subjects, their choice is constrained by the availability of competent teachers. In most cases, the teachers will not be available, hence learners may fail to get the needed tuition. Technical subjects such as metal work, technical graphics and wood work are important in transforming livelihoods, yet brain drain as well as high staff turnover have undermined the continuity of such practical subjects. Respondents highlighted that there is a shortage of experienced teachers with the capacity to teach such highly technical areas. Public policy and equality in education As previously observed, a number of parents and guardians are failing to send their children to school in both public and private schools.
The main challenge is that both public and private schools demand the payment of tuition and levies. This entails that underprivileged families will be excluded from accessing education, yet Sustainable Development Goal number 4 demands that member states enable every child to be educated. A scan of the available documents revealed that Zimbabwe has no particular legislation for the inclusive education of learners. Whilst government policies are consistent with the intent of inclusive education, they are bereft of the much-needed implementation. Examples of such legislation, policies and strategies include the Education Amendment Act of 2020, the Disabled Persons Act Ch [17:01] as well as various Ministry of Education circulars (Ministry of Primary and Secondary School Strategic Plan, 2016-2020). It is argued that the policies require that all learners, irrespective of race, religion, gender, creed and disability, have access to primary education. Thus, this implies that education for all is not mandatory for students in secondary schools. In addition to the foregoing, the Disabled Persons Act (1996) does not compel the government to provide inclusive education in any concrete way. Among the secondary schools in rural areas, it was observed that the majority do not have the special facilities that are meant to enable effective learning for the disabled. Therefore, it can be concluded that disabled students are being sidelined as the school environment discriminates against them. It is also noted that the Disabled Persons Act expressly prevents citizens with disabilities from suing the Zimbabwean government regarding government facility access issues that may impair their community participation. One school head had this to say: "I have been at this school for quite some time and I have realised that in the absence of any mandatory order stipulating the services to be provided, and by whom, how, when, and where, there could be no meaningful educational services for learners with disabilities in Zimbabwe, of which those in secondary schools within rural communities are the most affected. This requirement for education for all does not extend to secondary school level, perhaps the Zimbabwean government feels literacy is achievable at Grade 7 level or it views that a high school education is a privilege, rather than a right". This observation undermines the attempt to empower learners who aspire to access quality education in rural areas. Public policy provisions have not been used to effect positive changes in line with improved access to education in the country. This is more pronounced for secondary school learners, who end up leaving the country as illegal immigrants or joining the informal sector.

Special needs and inclusive education in rural schools

Whilst there is a need to promote inclusive education in schools, it was observed that parents have a stigma against people with certain forms of disabilities. Some parents actually prefer to keep their disabled children at home, thereby disadvantaging them. Learners with special needs are segregated both overtly and covertly. The in-depth interviews elicited that some parents are unwilling to consider disabled children as equal beings. According to the focus group discussions, there are some stereotypes that make rural learners sceptical about learning in the same class with people having albinism, epilepsy or any other form of disability. This indicates that inclusive education is yet to be recognised due to a lack of understanding and acceptance.
The evidence from the respondents indicates that the level of appreciation of inclusivity among both parents and learners is still minimal. While inclusive education has been defined differently, it involves the identification, minimization or elimination of barriers to students' participation in traditional settings (i.e., schools, homes, communities and workplaces) and the maximisation of resources to support learning and participation (Shava, 2020). According to the Committee on the Rights of Persons with Disabilities (as cited in Shava, 2020), inclusive education means:
• A fundamental right to education.
• A principle that values students' wellbeing, dignity, autonomy and contribution to society.
• A continuing process to eliminate barriers to education and promote reform in the culture, policy and practice in schools to include all students.
Thus, this study established that the constraints faced by learners living with some form of disability are more pronounced, and they suffer a double burden given that they learn in rural schools that are generally under-resourced.

Environmental constraints to child performance

Parents also shared their sentiments with respect to social amenities in rural areas. It was gathered that learners in rural secondary schools continue to express dissatisfaction with the manner in which health care facilities are distributed in the wake of the Covid-19 epidemic. One parent interviewed had this to say: "As long as the unavailability of proper health care facilities in rural schools still exists, learners remain exposed to vulnerable conditions, hence affecting their participation in their academics". The foregoing sentiments were supported by various authors who have expressed that some of the institutional conditions that hinder learners' learning include cultural background, psychological problems, curriculum changes and the allocation of subjects to teachers without considering their areas of specialization (Munaka, 2016). The author also adds school climate, curriculum change, teaching methods, availability of teaching aids, assessment methods, learners' discipline, school culture, overcrowding in classes, motivation and students' background as factors that require attention to ensure academic success among learners. These challenges also have spillover effects on the conduct of teachers, and it is argued that academic performance is affected by several factors, which include the attitude of some teachers to their job (Shava, 2020). This attitude is reflected in poor attendance at lessons, lateness to school, the passing of unsavory comments about students' performance that could damage their ego, and poor methods of teaching, all of which directly affect students' academic performance (Ebrahim, 2009). School culture and climate are the heart and soul of the school, and its essence draws teachers and students to love it and to want to be a part of it (Ebrahim, 2009). The authors further posit that the type of school a learner attends is a factor that has a profound influence on his or her academic achievement. School culture affects teaching effectiveness. In this sense, Wheeler and Richey (2005) posit that schools that create learning environments that are safe and supportive for both learners and teachers ensure high teaching and learning outcomes.
Opoku-Asare and Siaw (2015) simply describe rural areas as deprived, lacking many government developmental interventions, such as potable water, electricity, good roads and school infrastructure, to improve the lives of the people. Without first improving access to and the quality of education, sub-Saharan African countries cannot attain the level of growth needed to reduce poverty in line with Agenda 2030 goals.

Learners' Experiences in Rural Secondary Schools

Learners in rural areas walk long distances to get tuition, as has already been noted. This affects their level of performance as they struggle to concentrate. According to IFAD (2001), 18% of rural dwellers are at least 1 km away from the nearest water source, and 32% live 22 km from the nearest school and health centre. The long distances covered by both teachers and pupils have a double effect. Learners from rural communities perceive themselves as inferior to those in urban areas. Teachers fail to handle their normal teaching load as they reach the classroom tired (Nelson Mandela Foundation Annual Report, 2005). Similarly, pupils tend to lose concentration and in many cases doze off while the class is in session (Nelson Mandela Foundation Annual Report, 2005). In addition, access to schools becomes even more difficult during the rainy season as most meeting places and venues become uninhabitable. Grass-thatched buildings tend to leak, or water finds its way easily into buildings due to poor drainage. Walking from home to school becomes a problem as most rivers and streams get flooded, making it difficult for children to access education.

Infrastructural challenges and segregation in rural schools

Feedback from the respondents indicates that in rural secondary schools there are not enough classrooms, and some learners work in the open air or under trees. The respondents felt that rural secondary schools are treated as less important and receive little support from the government to erect the requisite infrastructure. These challenges are being encountered despite the fact that the Zimbabwean government has aligned its Education Amendment Act of 2020 to the country's Constitution Amendment number 20 Act of 2013 (Fambasayi & Moyo, 2020). The new Act envisages that every learner has the right to quality and inclusive education, but some learners still face institutional constraints such as a lack of Information Communication Technology (ICT) devices. ICT infrastructure has become necessary in the wake of the Covid-19 epidemic, whereby school interruptions have become prevalent. The Zimbabwe Education Amendment Act of 2020 has fairly extensive provisions to protect, respect and fulfil the right to education for all children. It addresses issues pertinent to education, including the prohibition of expelling pregnant girls from school, free and compulsory education, sexual and reproductive health issues and the rights of learners with disabilities (Kashaa, 2012). Responses from the study indicate that girls face stigma after falling pregnant, and most rural secondary schools have little or no capacity to handle such psychological challenges.

The interventions that can be implemented to address the institutional constraints

Participants in the study gave their insights on the possible interventions that can be made to improve the prevailing situation for underprivileged communities. It emerged that there is a need to introduce some concrete solutions that will enable effective learning.
One such intervention is the introduction of teacher motivation modalities. Giving monetary as well as non-monetary incentives to teachers in rural areas was cited by both parents and teachers as a possible solution to the challenge of high staff turnover. The education mission appears to be reliant on the way teachers feel about their work and how satisfied they are with it (Kielblock, 2018). Therefore, it is not surprising that researchers suggest that "schools must give more attention to increasing teacher job satisfaction" (Finger, 2016; Dinham & Scott, 1998; Heller et al., 1993). There is a need to improve the infrastructure in rural areas through the construction of community libraries, building more specialised units such as disability centres, building laboratories and providing high-speed internet. While governments hold the main responsibility for ensuring the right to quality education, the 2030 Agenda is a universal and collective commitment to transformative education (Shava, 2020). Schools could use departmentally driven programmes to launch their own school-based programmes. Similar studies have established that there was abuse of the cascade model of Teacher Professional Development, which displayed a need for capacity-building and a change of attitude among teachers so that they could use the model to their benefit (Nhlumayo, 2020). African countries must find the political and national will to deal with the constraints of rural education (Chakanika et al., 2012). There is also a need to ensure that there is close monitoring of schools by school heads, school inspectors and school development associations. Collaboration among these key stakeholders will enable relevant and timely feedback that will also improve the prospects of overcoming institutional constraints. This is in line with the theoretical framework adopted in this study, where the various parts of the system complement each other to ensure maximum results.

Conclusion

A number of themes were extracted from the analysed data that enabled the researchers to explain the challenges that affect learner performance and teaching in rural secondary schools in Zimbabwe. The key themes that emerged as institutional constraints on learner performance and teaching in Seke and Shamva districts were the nature of the teaching and learning environment, institutional factors relating to teachers, learners and head teachers, and lastly policy-related factors. As discussed above, institutional constraints affecting child performance in secondary schools within rural communities still prevail. A number of recommendations are also given so that several stakeholders, especially those that have a mandate to address institutional constraints affecting child performance, are engaged. The multiple impacts, including lack of resources, lack of motivation among teachers and the psychological challenges of learners, have led to serious problems for rural school learners' academic achievement. Teachers play a pivotal role in any education system, and they are the most important determinant of student learning in the classroom; yet as a result of the poor environment under which teachers work, especially in rural areas, the quality of education has been adversely affected. Finally, teacher training institutions should have programmes to prepare teachers for the conditions of rural teaching.
These may include life skills orientation tailor-made for rural areas and business models that will enhance teachers' income generation and their capacity to adapt in rural and remote areas. In summary, it is clear from these views that as a result of the numerous problems affecting the academic performance of our educational system, the system has not only failed to ensure mass participation but has also practised discrimination in the distribution of manpower and infrastructural facilities. Evidence gathered from the related literature identified a preference for urban schools over rural schools. The inequalities are made worse by differences in the quantity of teachers, educational facilities and other inputs between schools serving different geographical areas. A lot needs to be done to ensure the institutional constraints in rural secondary schools are addressed.

Recommendations

It is recommended that institutional constraints in rural secondary schools should be holistically addressed to afford learners equitable and inclusive education. Thus, infrastructure for rural learners must be highly prioritised. ICT facilities and libraries for learners are recommended, especially in the wake of global epidemics such as Covid-19, to cover the gap created by learning interruptions. In addition, there is a need to ensure that rural secondary schools are equipped with the necessary health care facilities for learners. Awareness of the dictates of the educational policies is key, as there is little or no attempt by the government to inform communities, parents and learners that they can benefit from free education as per Ministry of Education Circular number 3 of 2019. As part of the interventions, schools could use departmentally driven programmes to launch their own school-based programmes. The introduction of homegrown solutions, such as regular hot meals offered to students for free and/or at reduced prices during lunchtime, may also help sustain learners' interest in attending school.
Osteoarthropathy in mucopolysaccharidosis type II.

INTRODUCTION
Mucopolysaccharidosis type II (MPS type II, Hunter syndrome) is a rare (~ 1/1500.000), X-linked inherited disorder (affecting boys) due to deficiency of the lysosomal enzyme iduronate sulfatase (Xq.28). The complex clinical picture includes osteoarthropathy with a tendency to flexion stiffness and disability. In our country, the specific diagnosis and enzyme replacement therapy (ERT) have recently become available in the Center for Genetic Pathology Cluj.

OBJECTIVES
Assessment of the clinical, radiological and imaging features of osteoarthropathy in MPS type II and their evolution under ERT.

MATERIAL AND METHODS
The study included 9 male patients with a clinical picture suggestive of MPS type II; the diagnosis was confirmed by enzymatic assay and the patients were treated with ERT. Osteoarthropathy was assessed before treatment by: a) clinical tests (joint goniometry, walking test) and b) radiology (X-rays of the hand and wrist, spine and pelvis), with bone densitometry in five patients. Clinical tests were repeated after therapy.

RESULTS
Chronic osteoarthropathy was present in all patients. Joint mobility was reduced, with a quasi-stationary trend after 12 months of treatment. The walking test was improved after treatment. Radiological assessment revealed hand bone changes, delayed bone age, vertebral changes, pelvis changes, kypho-scoliosis and aseptic necrosis of the femoral head in 100%, 88%, 88%, 55% and 11% respectively. Bone mineral density was normal in five of the nine patients evaluated.

CONCLUSIONS
Chronic osteoarthropathy with flexion stiffness is an essential component of the clinical picture of MPS type II. ERT allows an improvement/arrest of evolution (depending on disease severity and the time of initiating therapy).

Introduction

Mucopolysaccharidosis type II is an X-linked inherited disorder caused by deficiency of the lysosomal enzyme iduronate-2-sulfatase, owing to a mutation in the I2S gene located on the long arm of the X chromosome. As a result of this deficiency, the glycosaminoglycans that accumulate in lysosomes are dermatan sulfate and keratan sulfate [1]. The disease occurs exclusively in males, is multisystemic, and its clinical presentation varies from mild to severe forms, although the genetic enzyme deficiency is the same. The clinical phenotype consists of: dysmorphic features (enlarged head, coarsening of facial features, enlarged tongue), hypertrophic tonsils and adenoids, hepato-splenomegaly, abdominal and/or inguinal hernia, cardiovascular and musculoskeletal involvement and variable intellectual delay [2]. The disease is very rare; the incidence in Europe is 1/140,000-156,000 neonates [2]. The skeletal involvement, "dysostosis multiplex", is due to a defect in bone production and its manifestations are short stature consecutive to growth zone involvement of the long bones, bone deformities of the skull, chest and spine (kyphosis, scoliosis), and hip dysplasia [3]. The arthropathy is determined by glycosaminoglycan storage within the cartilage and connective tissues and by inflammation [4]. It is progressive and its clinical manifestation is joint contractures. Enzyme replacement therapy with idursulfase obtained by recombinant DNA (Elaprase), 0.5 mg/kg/dose, i.v., one dose weekly, was approved in 2006 [5]. The treatment efficiency is conditioned by the time of its initiation.
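As a minimal worked illustration of the weight-based dosing regimen cited above (the 25 kg body weight is a hypothetical value chosen only for illustration and is not taken from the study cohort):

\[
0.5\ \mathrm{mg/kg} \times 25\ \mathrm{kg} = 12.5\ \mathrm{mg\ of\ idursulfase\ per\ weekly\ infusion.}
\]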
The specific diagnosis of this disease has been possible since 1997; nevertheless, the disease is underdiagnosed and treatment is usually started late. A better understanding of the clinical manifestations and the characteristics of the arthropathy could lead to an earlier diagnosis of this disease.

Objectives

The authors proposed to present the clinical and radiological characteristics of the arthropathy in mucopolysaccharidosis type II and their evolution under enzyme replacement therapy.

Subjects and methods

Nine males were evaluated by complete clinical examination and by the iduronate-2-sulfatase enzyme assay in blood (leukocytes). This enzyme is present in all cell types of the body apart from mature red blood cells, so its activity can be determined in various cell types, plasma and serum. The determination of the enzyme activity by the fluorimetric method was done at Sahlgrenska University of Medicine, Sweden, and at the University of Medicine Mainz, Germany [6]. One patient died; six patients are undergoing enzyme replacement therapy with Elaprase (for one year in 5 patients, and for 9 years in one patient who participated in a therapeutic trial before the introduction of Elaprase therapy in Europe); and 2 newly diagnosed patients will initiate the therapy soon. The arthropathy was evaluated by: a) clinical tests (joint goniometry, 6-minute walking test) and b) radiological and imaging tests (hand and wrist, spine and pelvis) in all patients, with osteodensitometry before treatment in 5 patients (DEXA, General Electric Lunar Prodigy Advance). Goniometry and the walking test are indicated for assessment during treatment. The mobility of the joints was assessed in the sagittal plane for the following joints: shoulder (flexion), elbow (flexion, extension) and wrist and digits (flexion, extension) [7,8]. Five patients were assessed before treatment (one who did not undergo treatment and 4 patients who underwent treatment); two patients were not assessed because of their lack of compliance. The six-minute walking test measured the distance traveled by the patient back and forth during 6 minutes [9]. The test was done in 3 of 8 patients: in 2 patients both before treatment and after one year of enzyme replacement therapy, and in one patient before treatment only. The other five patients did not perform the test: 4 patients could not perform it because of neuromotor delay, immobilization in a wheelchair due to aseptic necrosis of the femoral head, or lack of compliance (one, one and 2 patients, respectively). A patient who was undergoing treatment could not be assessed before treatment because he was not under our observation at that time. The study was conducted with the approval of the ethics committee of the Emergency Hospital for Children Cluj.

Results

The age at clinical onset, at diagnosis and at starting therapy are presented in figure 1 (a, age at clinical onset; b, age at nonspecific diagnosis; c, age at specific diagnosis; d, age at starting therapy). The joint goniometry before treatment (in 5 patients) and after one year of treatment (in 3 patients) is presented in table I. The six-minute walking test results are presented in table II.

Discussions

The osteoarticular involvement is a very important sign which, together with the dysmorphic features, can guide the diagnosis of MPS type II. It is progressive and represents a major cause of disability for these patients. In our group, the median age at clinical onset was 2 ± 0.8 years, which is comparable with the age of 1.8 years reported by Wraith et al. [10].
The median age at specific diagnosis was 5.1 years and at treatment onset was 10.8 years. The difference between the age at diagnosis and the age at starting therapy (5.7 years) is explicable because enzyme replacement therapy has only been available in Romania since 2011, 5 years after it was approved in Europe. The arthropathy was accompanied by dysmorphic features and obstructive cardiomyopathy in all patients; 66.6% of patients presented limited/intellectual delay; 55.5% of patients presented short stature; and 22.22% of patients had an ear prosthesis. Joint mobility before treatment showed a reduction in all joints evaluated, and after 12 months of treatment showed stationary values, in line with the results reported by other studies [11]. So we can conclude that enzyme replacement therapy prevents the worsening of the disease but does not improve the already established joint changes. The distance walked in 6 minutes was 363.5 m, comparable with the 362 m reported by Link et al. [9]. After 12 months of treatment, the distance walked in 6 minutes increased by 56.9 m and 98.2 m in the 2 patients who were evaluated before and after treatment. Other studies reported a smaller increase (37 m), but because of the large number of patients enrolled (94 patients and 96 patients respectively), a comparison of the results is not permissible [11,12]. The prevalence of the radiological changes (table I), namely hand bone changes, delayed bone age, vertebral changes, pelvis changes, kypho-scoliosis and aseptic necrosis of the femoral head (100%, 88%, 88%, 55% and 11%), is comparable with that reported by Link et al. [11]. Bone density was normal in 4 patients, because of their age and the short duration of disease evolution [13]. The single patient with below-normal values was the oldest of the group.

Conclusions

Chronic arthropathy with joint stiffness is an essential component of the clinical manifestations of MPS type II and represents a predictive factor for disability and impaired quality of life.
Panel discussion Power transformers are a mature technology that is both durable and robust, and has wide application in paper mill and other manufacturing industries. Often due to the appearance of invincibility, the electrical power distribution system and the power distribution transformers within these systems are either neglected or not monitored. Two basic types of power distribution transformers are Dry Type and Oil Filled Type. While the dry type are nominally easy to access, inspect and maintain, the inherent nature of the sealed tanks of oil filled transformers provide a unique challenge to determine current condition, inspection, and overall life cycle aging. As responsible managers of company assets, it is important to ensure all parts of the electrical system are monitored and maintained especially with components such as power transformers that are several thousand dollars in employer investment and not in general a quick off the shelf replacement. For many applications, there is a great amount of information regarding the proactive maintenance and monitoring of oil filled power transformers. I will discuss application of the ALARA concept to the evaluation of vesicoureteric reflux and, specifically, the use of the voiding cystourethrogram. The VCUG is the traditional method to screen children at risk for vesicoureteric reflux. It provides detailed anatomy of the bladder and urethra and if there is reflux, of the ureters as well. The examination is performed mainly in children who are at the greatest risk of harmful effects of ionizing radiation. Therefore, it is important to minimize dosage while achieving diagnostic accuracy. The objectives of my presentation are to review some of the technical advances in the diagnosis of reflux and to propose some recommendations for the evaluation of four groups of children: those who were diagnosed with urinary tract infection, siblings of children with reflux, children diagnosed with antenatal hydronephrosis, and children with one solitary functioning kidney. With regard to the technical advances which were nicely reviewed earlier today, there has been enormous improvement in the last couple of decades in reducing radiation dose in patients studied for vesicoureteric reflux based on the use of pulse fluoroscopy. This technique results in roughly a 90% dose reduction with minimal loss of resolution. Digital fluoroscopy and limiting the number of spot images enables us to maximize image save acquisitions and thereby decrease the radiation dose. The leading alternative study to the VCUG in diagnosing reflux is the radionuclide cystogram, or RNC. Its main advantage has been decreased radiation exposure. Its use in our hospital has resulted in about one-tenth the dose that children would be exposed to for the VCUG. The sensitivity of the RNC is at least as great and potentially greater than the VCUG, although the anatomic detail in the RNC is not as good as that in the VCUG. Indirect cystography has been proposed by some with intravenous use of DTPA as a nuclear medicine study. More recently, ultrasonic indirect cystography has been studied, with the main advantage being avoiding catheterization of the child. Whether we are dealing with the nuclear medicine approach or the ultrasound approach, the high false-negative rates limit the effectiveness of the study such that it has not been embraced by the pediatric urologic community. More recently the voiding urosonogram has been promoted primarily in European centers. 
It is an intriguing technique and has been successful due to the development of an intravesical contrast agent. The voiding urosonogram has a 92% concordance with the diagnosis of reflux compared with both the VCUG and RNC. It has good sensitivity and equally good specificity. But similar to the RNC, the voiding urosonogram provides no anatomic detail of the bladder, ureter or urethra comparable to that of the VCUG. Another exciting recent advance is the MR cystogram. Images of the lower urinary tract are obtained before and after intravesical gadolinium administration and before and after voiding. Its advantages are that there is no additional radiation and it can potentially evaluate the kidneys for reflux nephropathy. The limitations are that the MR cystogram is less sensitive than the VCUG, and in young children sedation is likely to be required. This technique is currently regarded as experimental. For children with urinary tract infection, a VCUG has been regarded as fundamental to their assessment. It enables us to determine bladder and urethral anatomy, bladder capacity, the ability of the bladder to empty effectively, the presence of reflux and the severity or grade of reflux. We know that reflux in conjunction with urinary tract infection can result in renal damage. At its worst, it can result in end-stage renal disease. Reflux is present in 30-50% of children with febrile urinary tract infection upon initial evaluation. Up to 40% of those reported with reflux have renal scarring. Thus, this is a very important diagnosis to make. Once reflux is diagnosed, the likelihood of its spontaneous resolution is determined by the age of the reflux, the laterality of reflux, and its grade. Therefore, the VCUG is of prognostic importance. The indications for ordering the VCUG are very much tied into one's definition of urinary tract infection. We regard a catheterized specimen or suprapubic aspirate of urine as the gold standards. Greater than 10^5 colonies in a midstream urine will be 80% diagnostic of a true urinary tract infection. This colony count on two consecutive urine samples is 90% diagnostic. A bag specimen is only diagnostic if there is no growth, as positive growth is likely to reflect perineal flora. The greatest risk of renal damage in all children, both males and females, is prior to toilet training. Therefore, aggressive screening is justified for these children, and the VCUG would be the study of choice. In prepubertal males with a well-documented urinary tract infection, the VCUG is our choice to define urethral anatomy in addition to assessing bladder anatomy and ruling out reflux. If the urinary tract infection is less well documented, it is reasonable to perform an ultrasound scan initially as a compromise. This may be the case in uncircumcised males. The key is that in order to image the urethral anatomy clearly, the VCUG is the necessary study. In prepubertal females, if they have pyelonephritis, the VCUG would be the study of choice because the likelihood of high yield is greater. If they simply have recurrent urinary tract infections or cystitis, we would regard the RNC as the initial study of choice. For postpubertal children, there is a minimal risk of renal scarring. Therefore, no imaging is necessary in the presence of cystitis only. Similarly, for black children, there is a low incidence of reflux.
For post-toilet training black children presenting without fever, cystography is not regarded as necessary and an ultrasound would be regarded as a reasonable initial study. Sibling reflux, a topic that has received more attention within the last decade or two: the incidence of sibling reflux is 32%. If the sibling is under age 2 years, the incidence of reflux is 44% and drops to 9% if they are over age 6 years. The incidence in males is essentially equal to that in females. In addition, twins are at higher risk for sibling reflux, and monozygotic twins are at particularly high risk. Two-thirds of sibling reflux will be low-grade; half of it will be unilateral. There appears to be a higher resolution rate and an 11% lower incidence of renal damage than in symptomatic reflux. The goal of sibling screening is to prevent renal damage. One can consider four groups of children for screening: those in the newborn to toilet-training age, those in the age range between toilet-training and puberty, postpubertal children, and then the symptomatic sibling. For newborns until they are toilet-trained, it is reasonable to perform a RNC for both males and females. It is a sensitive study with low radiation dosage. For the second group of toilet-trained children up until puberty an initial ultrasound scan is a reasonable anatomic assessment. If there is an abnormality in terms of discrepancy in renal size, renal scarring, a dilated ureter, or urothelial thickening, then one can move to an RNC. Given the relatively low prevalence of reflux in this age range, we would not jump to an RNC initially. For the postpubertal child, we would consider that no studies are necessary whereas some would advocate an ultrasound scan. Symptomatic children should be studied as aggressively as any child with a urinary tract infection would be studied. Of the four topics that we are discussing, the one for which it is probably the most difficult to make firm recommendations is the patient with antenatal hydronephrosis. This is now being diagnosed in 1-2% of all pregnancies. Yet the clinical relevance of this finding is unclear and there are no large prospective studies that truly assess this properly. In terms of the postnatal evaluation of children with antenatal hydronephrosis, the majority of pediatric urologists would say that moderate to severe antenatal hydronephrosis that persists postnatally warrants a VCUG. For mild antenatal hydronephrosis that persists or resolves in the postnatal period, it is controversial as to whether or not the VCUG is warranted. There certainly is a higher risk of reflux in the antenatal hydronephrosis population than in the general population. Roughly 15% of these patients, and males in particular, appear to be at higher risk for bilateral high-grade reflux. Thus, one might lean toward being slightly aggressive with the male. Our recommendations would be that for bilateral severe antenatal hydronephrosis or the solitary kidney with any grade of antenatal hydronephrosis, a postnatal ultrasound in 1-2 days would be advisable. For all other grades of antenatal hydronephrosis, a postnatal ultrasound scan within the first month of life would be fine. For males and females with moderate to severe antenatal hydronephrosis, we would advocate the VCUG. For newborns with persistent mild antenatal hydronephrosis, we would recommend an ultrasound scan and follow those children sonographically. 
Other colleagues around the country would regard a VCUG as appropriate, but the majority of us favor sonography in this setting. Newborns with resolved, mild antenatal hydronephrosis may or may not require further imaging. We propose that they probably do not. Others feel that some imaging at a later point, at 6 months or a year, would be appropriate. This is a controversial area. With regard to the solitary kidney, we really are discussing those children who are diagnosed with multicystic dysplastic kidney as well as renal agenesis. Contralateral reflux has been found in a small number of studies in 13-28% of those with multicystic dysplastic kidney, certainly higher than in the general population. The majority of reflux in these studies was mild to moderate with a high spontaneous resolution rate. For renal agenesis, which of course may represent involuted multicystic dysplastic kidney, the rate of contralateral reflux is comparable at 5-24%. Thus, in patients with a solitary kidney, there is a higher incidence of reflux than in the general population. The stakes are high for these patients in that reflux of nephropathy of the involved kidney could be devastating. Therefore, we believe that the RNC, or potentially the voiding urosonogram should it become common clinical practice in the United States, would be very advisable. Once reflux is diagnosed, it has been the tradition that most pediatric urologists reassess patients annually. This practice is for a variety of reasons. One is that it is easy for a patient to remember and it is fairly easy for office staff to schedule an annual follow-up. The issues involved with regard to follow-up of the study of reflux are radiation exposure, repeated instrumentation of the child, which is a major issue for many families, antibiotic exposure, and finally cost. On the other hand, you could wait for a long time to determine that reflux has resolved. There was an interesting study in Pediatrics by Thompson in which he created a model for reflux resolution based on the literature. He proposed, based on his model, that for low-grade reflux, grades 1 and 2, a VCUG could be performed every 2 years and that for moderate to severe reflux, grade 3 and above, the VCUG could be performed every 3 years. With this approach the yield would be a 19% reduction in the number of VCUGs and a cost reduction of 6% with a higher antibiotic exposure in 26%. There are certain problems with this study as you can imagine but the concept is a worldly one. Our feeling based on analysis of this study and consideration of our own practice is that for low-grade reflux, one should continue studying these children annually because the likelihood of resolution is high. If the first study that is done is a RNC, then the follow-up study should be a VCUG whether that should happen in a year, 18 months or whenever. It is important at some point to obtain anatomic detail. For those children with moderate to severe reflux greater than or equal to grade 3, one should consider follow-up at 18 to 24 months until resolution, depending upon the logistics of the family. If there is bilateral reflux, the more severe side should dictate the timing of follow-up. Following anti-reflux surgery for those children who do come to surgical resolution, we feel that documentary imaging is advisable. This is controversial nationwide. There are centers that no longer study children postoperatively. We feel that it still is valuable to do so and currently prefer the RNC. 
Should the voiding ultrasonogram become widely accepted, this would be an ideal alternative for studying these children. I would conclude that in upholding the ALARA concept in the diagnosis of vesicoureteric reflux, we have been aided considerably by the technical innovations that were discussed today, by the use of alternative studies which are expanding, by judicious patient selection, our understanding of who actually requires the study in the first place and by careful consideration of what the ultimate timing of follow-up study should be. Dr. John Boyle It was fluoroscopy that introduced gastroesophageal reflux into the United States. I would consider Donald Darline as one of the founding fathers of gastroesophageal reflux in the United States. Gastroesophageal reflux refers to the passage of gastric contents into the esophagus. It can be best conceptualized as three different manifestations in pediatric patients. First, it is a physiologic phenomenon. This has been defined by intraesophageal pH monitoring in asymptomatic infants and children. It is also a very common clinical syndrome in infancy manifested predominantly by chronic vomiting and oral regurgitation. Next, it is a disease called gastroesophageal reflux disease, or GERD, which has gained popularity in all print and television media. GERD occurs when refluxed gastric contents produce symptoms or tissue damage. All of these manifestations have a common mechanism. Reflux is predominantly a dysfunction of the lower esophageal sphincter, the valve at the gastroesophageal junction. Reflux is not caused by a weak sphincter; it is caused by a sphincter that relaxes at times when it should not. This is termed transient relaxation of the lower esophageal sphincter. It is the mechanism for gastroesophageal reflux in premature infants to 90-year-olds. The mechanisms and causes of transient relaxation are not well understood. One of them certainly is gastric distention, which is something that occurs during a fluoroscopy study. This is a depiction of pH monitoring in asymptomatic patients, infants, children, and adults. In pH monitoring, a pH of 4 is considered the critical pH. Gastric contents with a pH that is acidic are capable of producing tissue damage. Reflux occurs about 73 times over a 24-hour period in a normal asymptomatic infant and up to 45 times per 24-hour period in an adult. Many of these reflux episodes are prolonged, lasting more than 5 minutes. The percentage of time that the pH is less than 4 over a 24-hour period is about 11% in infants, dropping down to 6% in older children and adults. So we all reflux and most of us are asymptomatic. Reflux as a clinical syndrome in infancy is extremely common. Of all infants, 50-60% vomit or have regurgitation one to three times per day in the first 6 months of life. In around 20%, this is more than four times per day. This frequency dramatically drops off in the second 6 months of the first year. By 1 year, only 5% of infants vomit one to three times per day, and less than 1% are vomiting more than four times per day. So reflux is felt to be a developmental disorder in infancy and most people do not consider it to be a disease. Semantics is a major problem in reading the literature about reflux. Reflux in infants is a developmental dysmotility that is conceptualized. I tell parents it is an exaggerated birth reflex. GERD occurs when refluxed gastric contents produce symptoms or disease. It is a functional disorder. 
There are no specific structural, infectious, inflammatory, or biochemical causes for these transient relaxations. You cannot do a culture and define reflux. You cannot do a blood test and define reflux. In most patients with GERD, there is an increased frequency of reflux or prolonged exposure of the esophagus to an acid environment above those physiologic parameters we discussed before. However, and this is where things really start to get confusing, GERD may occur in patients with physiologic reflux. It is best to conceptualize the symptoms or clinical manifestations as esophageal symptoms and extra-esophageal symptoms. In an infant, esophageal symptoms of GERD may include excessive vomiting, unexplained irritability, feeding difficulty or poor weight gain, and sleep disturbance, which are very common symptoms in infants in general. Older children behave more like adults with chronic heartburn, epigastric abdominal pain, oral regurgitation, episodic vomiting, dysphagia, and rarely hematemesis. Everyone knows the esophageal symptoms of reflux. About 2% of children between the ages of 3 and 9 years have heartburn or oral regurgitation weekly and between 5 and 8% of adolescents between the ages of 10 and 17 years have reflux at least weekly. This number jumps up to about 20% of adults being described as having either heartburn or oral regurgitation at least one to two times a week. So a very, very common problem in infancy drops off in early childhood, starts to increase in adolescence and then starts to go back up as an adult. All are caused by the same mechanism but yet the causes of these transient relaxations are not known. This is an area that has blossomed over the last 10 years. The extra-esophageal manifestations of GERD now include chronic cough, chronic sore throat, dental erosions, hoarseness, recurrent otitis or sinusitis, wheezing or chronic asthma and in small babies, apnea or bradycardia. The mechanism for extra-esophageal GERD is primarily felt to be aspiration but not overt aspiration that produces changes in chest radiographs-microaspirations triggering either inflammation or reflux changes in airway resistance, cough, palatal dysfunction. A lot of mechanisms have been described. In most patients, reflux is a clinical diagnosis. Diagnosis is reasonably assumed in clinical practice by a substantial reduction or elimination of suspected reflux symptoms during a therapeutic trial of lifestyle modifications and acid-reduction therapy. However, many physicians still want an objective test. They want to know that GERD or another phenomenon is the cause of clinical concern, especially in a child who has vomiting in excess of four episodes per day or symptoms that suggest esophageal pain or respiratory disease that is not responding to usual therapies. What are the diagnostic tests for reflux? Barium contrast upper GI series, the founding father of reflux, intraesophageal pH monitoring, upper endoscopy with an esophageal biopsy, multichannel intraluminal impedance and technetium sulfur colloid scintigraphy. We are just going to concentrate tonight on barium contrast. Barium contrast. There is no debate that there is a definite role for the upper GI series for the evaluation of chronic vomiting. Fluoroscopy is the test of choice to determine if the patient has an anatomic abnormality in the upper gastrointestinal tract. It allows evaluation of the esophagus for stricture, ring, hiatal hernia, of the stomach size, gastric outlet, and malrotation. 
It provides a lot of information and helps to solidify a diagnosis of reflux. What are the indications? Well it certainly is not indicated in all vomiting babies. Reflux is indicated in bilious vomiting, forceful or projectile vomiting. Radiologists do not have a lot of say in who gets an upper GI series because of scheduling, but radiologists do have a say in evaluation of the acute vomiter. If a pediatrician, gastroenterologist, or a surgeon is worried about pyloric stenosis, then radiologists have input to say that an upper GI series can be replaced by an ultrasound scan. But in most vomiters, radiologists are stuck. The schedules extend far enough into the future that the infant shows up, what are you going to tell the parent? They are forced to do these studies. A lot of these are done for what is perceived to be forceful vomiting on the parent's part. They are trying to push the doctor to do something different because there are no therapies for gastroesophageal reflux. Feeding difficulty or dysphagia, poor weight gain, or weight loss are reasons for evaluation with barium contrast UGI. Probably the biggest reason that these studies are done is to reassure the parent, not so much the physician, that this child does not have any anatomic abnormality. This is a controversial area in medicine. I think we have to recognize more and more that there are many times when a negative test really does significantly help in the management of a patient. There are many infants with reflux whose parents start to manipulate their diets; start to change formulas, inadequately feed them to avoid the symptom of vomiting. It helps the pediatricians sometimes give a parent more objective evidence that the child does not have a serious problem. It certainly makes sense to see how we can decrease fluoroscopy time and radiation exposure but to not necessarily eliminate the test. There is definite reason to debate whether or not the radiologist should note the presence of gastroesophageal reflux or altered esophageal motility, or delayed gastric emptying during fluoroscopy. There is no standard methodology of doing an upper GI series. Radiologists treat this test differently, even amongst radiologists within the same institution. The volume of barium meal changes. Many radiologists are not very patient with an infant; if they do not swallow the barium, they get a tube. The duration of observation of spontaneous reflux, provocative maneuvers to elicit reflux and fluoroscopy times all vary. The radiologists in this room-and I am sure the physicists, too-view gastroesophageal reflux detected by fluoroscopy as a descriptive phenomenon and acknowledge that reflux detected by fluoroscopy does not equal GERD. But I must try to emphasize that it is important for radiologists to recognize how profoundly descriptive radiography reports impact on clinical management. If you write reflux on a radiography report, I can guarantee you the pediatrician or the family practitioner is going to take that to the bank and that child will be started on pharmacotherapy. That pharmacotherapy is going to be stepped up and people are going to forget that they ever ordered it in the first place. The child will grow up on acid-reduction therapy and we do not know the long-term implications of chronic acid reduction therapy in children. Does reflux during fluoroscopy correlate with GERD? There have been few studies, the majority being adult case series, where pH monitoring is the standard for diagnosis. 
These have shown low sensitivity and moderate specificity. It is because of this low sensitivity, or the perception in adults, that the concept of using provocative maneuvers was introduced to improve the sensitivity of this test as a test for reflux. As a result, people do abdominal compressions, valsalva maneuvers, positional changes; they will put the patient upright, right lateral prone oblique has been described, rolling from side to side, leg lifting, which I guess is just valsalva, coughing, water siphon test. Basically, if you look at spontaneous reflux in an adult, the sensitivity is in the range of 20-50%, which is a fairly low sensitivity, but the adults report very high specificity. So a patient who does not have GERD should not show reflux on a fluoroscopic examination. By doing provocative maneuvers, the sensitivity increases into the 40-70-90% range, but the specificity decreases; reflux is elicited in patients who have no disease. Pediatric series have been mostly case series; reflux is present in a large percentage of pediatric patients who are studied for any reason. The percentage of patients who have reflux progressively diminishes with age. Reflux is present in a number of children whose symptoms would not suggest its presence, and the height of reflux does not distinguish symptomatic from asymptomatic patients. A study by Cleveland in 1983 looked at spontaneous reflux. He did intermittent fluoroscopy over 5 minutes, reported that the total fluoroscopic time was 15-20 seconds for these studies, so it was sort of intermittent pulse. He found 82%, or basically 80%, of patients in the first 1½ years of life had reflux demonstrated on an upper GI series. It did not really make a difference if these patients were symptomatic or asymptomatic. This was in the early 1980s when upper GIs were still being done to evaluate malabsorption. He got most of his asymptomatic infants from rule-out malabsorption studies. The number of cases of gastroesophageal reflux drops as children age such that by 12-18 years of age, even though the numbers are very, very small, only about 13% of patients are found to have reflux, so the data are more like the data in adults than in adolescents. While there have not been good calculations of sensitivity and specificity in the pediatric age group, it would seem that in infants and very young children, there is a very, very high sensitivity and a relatively low specificity. The sensitivity goes down with increasing age and the specificity goes up. Children with cervical reflux. Again, there did not seem to be any correlation between cervical reflux and whether the patients were symptomatic or asymptomatic. The numbers are very small, but it is the only study that I know of that has really looked at height of reflux with barium studies. Regarding societies, this guideline is from the North American Society of Pediatric Gastroenterology, Hepatology and Nutrition. Our guideline basically states that although there is no doubt that fluoroscopy can demonstrate gastroesophageal reflux, this observation does not equate to GERD. And although fluoroscopy may detect reflux of barium to the cervical esophagus in patients with or without clinical symptoms of GERD, there are presently no prospective data showing that this observation can identify patients with extra-esophageal symptoms likely to respond to anti-reflux surgery. This is the take-home message from the pediatric gastroenterologist. 
So fluoroscopy in our world should not be prolonged in an attempt to demonstrate gastroesophageal reflux during a barium contrast upper GI series. There are no data to justify prolonged fluoroscopy time to perform provocative maneuvers to demonstrate reflux during a barium contrast upper GI series. Radiology reports should describe the presence or absence of reflux; I think that does help. However, I suggest putting a disclaimer at the bottom of the report saying "recommend clinical correlation before consideration of therapy." I think it is reasonable to put disclaimers on reports. It is OK to say, "Does it fit? Does it fit with what you are seeing clinically?" I liked the radiologists this afternoon who said they actually interview the patients; that is very good. A lot of times the radiologist is talking to the doctor but the technician is talking to the parent, and the parent walks out of the room thinking their child is about to die from reflux. In most cases, a careful history and physical examination are sufficient to diagnosis GERD. The upper GI series is the best test of choice to rule out upper GI anatomical disorders. Upper endoscopy is the most reliable test to diagnose and assess the severity of esophagitis, and that is a whole other story. Esophageal pH monitoring is the most reliable test to document abnormal esophageal acid exposure in endoscopy-negative symptomatic patients, to assess adequacy of acid-suppression therapy, and to correlate specific symptoms with reflux in those who are refractory to the PPI therapy and are being considered for anti-reflux surgery. Notice I did not say that esophageal pH monitoring is the most reliable method to diagnosis reflux. Questions (Q) and Answers (A) Q: I was really surprised by your recommendation to do an ultrasound scan 1-2 days after birth for antenatal hydronephrosis. I always thought that we had to wait past that period for physiologic oliguria before we studied these patients. I would like to hear your comments about that. A. Dr. Diamond: That approach is just for the group who has a prenatal diagnosis of bilateral severe hydronephrosis or solitary kidney with a diagnosis of hydronephrosis. In general, if you do a postnatal ultrasound scan within the first couple days of life, you may fool yourself into thinking that it truly is milder than it in fact is because of the relative dehydration of the child. But the concern here is to not miss the child with values who you want to pick up early. So while in general what you say is true, for this group of patients we think that looking early, and confirming that there is nothing that ought to be done in the acute setting, is reasonable. That does not get you off the hook if things look good to not study them again at about a month of age or so to get a more acute baseline. Q: I noticed that you avoided talking about quantification of severity. What would you consider severe, 8 mm, 1 cm, in prenatal hydronephrosis? A. Dr. Diamond: We use as our criterion of mild hydronephrosis as dilation of the renal pelvis only. Moderate if that hydronephrosis extends into the calyces and severe if there is parenchyma thinning in addition to calyceal dilation. So if there was parenchyma thinning, independent of the size of the renal pelvis, we would consider that severe. Q: When you say mild in terms of renal pelvis, we often see kids who come in with an outside ultrasound and they are labeled as having hydronephrosis when in fact it is just an extrarenal pelvis. 
How often do you encounter that problem? A. Dr. Diamond: That is a very common phenomenon, but we put a lot of stock in calyceal anatomy as an indication that the process is becoming a more severe one. Q: We as physicians interested in the urinary tract are severely criticized by many people who say that we have really not asked the right questions. It has been pointed out in the literature, I believe your literature, how much it costs to avoid end-stage renal disease in one child with a urinary tract infection; the numbers approach 5-15 million dollars in the literature. So I do not really care too much about scars necessarily; I want to know the outcome. I think we do too many cystograms, and I want to know the outcome of what you described versus not doing the cystogram, because I think the real issue is that within 5 years we will not be doing cystograms, nor even ultrasonic cystograms, because I think catheterizing is invasive unless there is a real reason to do it. What is your feeling about this? A. Dr. Diamond: This is an opinion that has been in the literature. There was a study from Australia not long ago that voiced a similar opinion. The sense that I have, and I think that some of my colleagues have as well, is that it is exceedingly uncommon for us nowadays to see a patient present in renal failure due to vesicoureteral reflux. My belief is that this is because we are probably doing something right. What does it cost? Is it worth it? Those are questions that I cannot answer. I think that it is a very uncommon phenomenon now that we see that, and I think that is a result of being more vigilant. Undoubtedly, more studies are being done than absolutely need to be done. Our perspective is different from the radiologist's perspective in terms of what the threshold should be for doing this study. It is largely because we as the tertiary consultants do not want to miss pathology and then have the child end up at another hospital where, lo and behold, pathology that may be regarded as significant is found. So, I think our feeling is that it is still important to err on being aggressive when the clinical indications are there, but at the end of the day there are going to be many negative studies in children who were studied and who would perhaps do just as well without it. Q: Do you have a strong feeling about the volume of contrast if you put in more than (age + 2) × 30 mL? I am talking about a radiographic VCUG. In most of the situations, the child will just not void with that small a volume. There is the belief that reflux can be induced in an otherwise normal system by overdistending it. Do you think that is the case? A. Dr. Diamond: I think that is a hard determination to make because very often the children that we evaluate for reflux are dysfunctional voiders who may have gotten into the bad habit of prolonged holding and have developed a pathologically high bladder volume. I think it is important to know what your predicted endpoint is, but sometimes that is not the right endpoint. Q: Dr. Diamond, at what point do you consider reimplanting a child's ureters as opposed to persistent follow-up with VCUG? A. Dr. Diamond: In general, I think of a surgical approach to the problem when there is breakthrough pyelonephritis; when the child is on a proper prophylactic antibiotic and they still become infected, so medical management really is failing. I would also consider surgical management if the child has quite a prolonged history of reflux. 
In general, it is uncommon for me to give up on a child who has severe reflux in less than, say, 4 years. I would normally give a child a fair observation period before I concluded that it was just impossible for them to outgrow reflux. I would also consider surgical management in the older female with persistent reflux. As an example, if we have the rather unusual scenario of an 8-year-old female who presents with a grade 3 reflux, that is a situation where I may wait a year or two but beyond that point, I would regard it as probably not being in her best interest to wait her out longer and think in terms of correcting that surgically. A third scenario is any situation where there is an anatomical abnormality like a Hutch diverticulum, which I regard as a surgical indication. Q: What you mentioned about volume, dropping an NG tube, etc. We think nothing of reproducing a bolus on the child who has a G-tube and has a wrap to try and provoke reflux. Instead of saying the bland statement, "clinical correlation recommended," the statement that you said was beautifully stated "the presence of gastroesophageal reflux is not necessarily indicative of reflux disease." This will help educate the clinicians because some of my pediatricians do not understand this, either. As you say, the way we word it makes them react and start treating the kids. A. Dr. Boyle: I think that is excellent. The way you put it is more factual and yet does throw it back to the pediatrician to think about what you are going to do with this information. Q: I think we were taught that a lot of physicians find it offensive when we say "clinical correlation is suggested" because they obviously saw the patient before they sent them in. A. Dr. Boyle: A lot of pediatricians still send in the patient for you to make a diagnosis of reflux. We have done a tremendous job of educating pediatricians and family practitioners about this disorder. Q: The radiation-producing test that is not done reduces radiation 100%. I think that it is our job when we think a test is ordered inappropriately, and we may well be wrong, to call the physician up and talk to him about it. How do you as referring physicians feel about that? A. Dr. Diamond: I have no problems. The problem is trying to get a hold of the physicians. Q: The other scenario that comes from some people that do not have an understanding of malrotation and bilious emesis will often get a request that says "upper GI and small bowel follow through show the ligament of Treitz." When you call the physician and say, "I don't need to do a small bowel follow through to demonstrate the ligament of Treitz," then they say, "I also want to see whether there's a stricture or something." It just seems like the time that you put into making the call to try and educate people oftentimes it works against you. I am not saying do not do it, but most of the time we have to do the study, anyway. A. Dr. Diamond: I think it is a healthy practice, and in the course of a busy morning when you are an hour behind seeing patients, it might not be welcomed at that particular moment. Whenever I get a call from the radiologist, I will pull out the chart to see why I ordered that test. Sometimes there will be a little piece that was not communicated to the radiologist and he will say, "Fine, that makes sense." Sometimes he will say, "David, an ultrasound will do the job. What do you say we just send the kid over to ultrasound?" I always respect that call because it shows that they are doing a very thorough job. 
My perspective is that the practice of pediatric urology has changed. It has become a much busier enterprise. We are asked to see more patients in less time and sometimes these details slip through the cracks. Q: What is your feeling about cystoscopically directed VCUG for those cases where you have recurrent infection and the VCUGs that we do have not been positive? I know from your literature that people are now doing cystoscopy, directing the cystoscope at the ureteral orifices and trying to show reflux. A. Dr. Diamond: I do not believe in it. I do not believe it bears any resemblance to the way things truly work physiologically. We do not do it and we do not believe in it. Q: With regard to gastroesophageal reflux: we have had a change in our chief of surgery, and we also are advocating nuclear medicine or the pH probe for reflux evaluations. However, we have not seen as good sensitivity with nuclear medicine because there has not been enough material given in the stomach. Thus, they are starting to rely again on our upper GIs and are now demanding that we give a lot of details about what volume we are giving. We are in the middle of a struggle to reeducate. Has that come up with you? A. Dr. Boyle: One of the mechanisms of transient relaxation is gastric distention. If you start to just fill up the stomach, the lower esophageal sphincter pressure will actually start to decrease and the crural diaphragm will relax, resulting in one of these transient relaxations. I know a lot of adult gastroenterologists ask the radiologists to have the patient drink the barium until they feel full and see if they unmask reflux. There is a definite problem with endoscopy because there is no standard technique. I did not get to go into pH probes. We are starting to have a lot of problems with pH probes because they are considered to be the gold standard for esophageal reflux. The sensitivity is extremely high: if you have erosive esophagitis, then you will have a positive pH probe. However, a large percentage of adults are starting to have what is called noninflammatory reflux. That is, they have normal endoscopies but significant heartburn, and that has to do with visceral hypersensitivity. Those patients will often have negative pH probes. If we consider children, I think erosive esophagitis in an infant is extremely rare. When we endoscope babies with reflux, they have histologic esophagitis. There are a number of case series reporting symptomatic reflux, heartburn or regurgitation, in patients in roughly the 3- to 17-year-old range who have negative pH probes. There are also a number of studies in asthmatic patients with severe intractable asthma who have minimal symptoms of reflux but have positive pH probes. So it is beginning to look like a positive pH probe does not necessarily dictate GERD and a negative probe does not exclude GERD. We have got to find a better way of looking at this issue. Impedance is a potential method, but the problem is going to be finding norms. The nice thing about impedance is that it detects nonacidic reflux, so we can investigate postprandial reflux, prolonged postprandial reflux and night-time reflux. On paper it is a nice study, but it is still invasive. Q: For both: In ordering a procedure, do you feel that it is your obligation to discuss with the parents what the procedure involves in terms of catheterization, potential pain, radiation exposure, etc., or do you then relegate that responsibility to the radiology personnel? 
Oftentimes, parents arrive and say, "what, a catheter?!" or "what radiation?!" and there has been absolutely no preparation for these families coming for both procedures. An upper GI may involve a gastric tube. What do you think should be the clinician's responsibility for preparing the family for both procedures that can involve pain? A. Dr. Diamond: I have never as a routine gone into the radiologic details because there are limited times in the day for me to see the patients that I need to see. The better answer now is that in my current situation, this is done by the Department of Radiology with a child life specialist so that actually when these children are scheduled for a study, the family will get a call the night before or the day before. They will get some background information about what the kids will be coming in for and they are well prepared by someone in child life working with the Radiology Department that can spend the time to go over those details with the parents. Most of those details relate to instrumentation as opposed to radiation. Given the number of studies that we order throughout the day, there is not time to go over real issues with the parents. I think it is proper that someone do it but it is not workable for us to do it. A. Dr. Boyle: We do not order that many GIs as specialists, but our nurses do tell the parents that their child may be restrained and if they refuse to swallow the barium would potentially have a nasogastric tube placed, which is at the discretion of the radiologist; and that we are asking the radiologists to give us information and they are going to work to provide that information.
Modulation of Cellular Biochemistry, Epigenetics and Metabolomics by Ketone Bodies. Implications of the Ketogenic Diet in the Physiology of the Organism and Pathological States Ketone bodies (KBs), comprising β-hydroxybutyrate, acetoacetate and acetone, are a set of fuel molecules serving as an alternative energy source to glucose. KBs are mainly produced by the liver from fatty acids during periods of fasting, and prolonged or intense physical activity. In diabetes, mainly type-1, ketoacidosis is the pathological response to glucose malabsorption. Endogenous production of ketone bodies is promoted by consumption of a ketogenic diet (KD), a diet virtually devoid of carbohydrates. Despite its recently widespread use, the systemic impact of KD is only partially understood, and ranges from physiologically beneficial outcomes in particular circumstances to potentially harmful effects. Here, we firstly review ketone body metabolism and molecular signaling, to then link the understanding of ketone bodies’ biochemistry to controversies regarding their putative or proven medical benefits. We overview the physiological consequences of ketone bodies’ consumption, focusing on (i) KB-induced histone post-translational modifications, particularly β-hydroxybutyrylation and acetylation, which appears to be the core epigenetic mechanisms of activity of β-hydroxybutyrate to modulate inflammation; (ii) inflammatory responses to a KD; (iii) proven benefits of the KD in the context of neuronal disease and cancer; and (iv) consequences of the KD’s application on cardiovascular health and on physical performance. Introduction The presence of ketone bodies (KBs) in all forms of living organisms, including Eukaryotes, Prokaryotes and Archaea, is a consequence of lipid metabolism; in particular, β-oxidation [1]. These low molecular weight intermediates, i.e., acetoacetate (AcAc), β-hydroxybutyrate (BHB) and acetone (Ac), act as an alternative to glucose as energy fuel [1]. Under physiological conditions, the plasma concentration of KBs in humans oscillates around 0.05-0.1 mM, whereas in the conditions of enhanced KB-production caused by prolonged exercise, starvation, carbohydrate restriction/ketogenic diet or insulin deficiency, their level can reach 5-7 mM, and in particular circumstances even 20 mM, a concentration indicative of diabetic ketoacidosis [2]. Although ketoacidosis is a pathological state, nutritional induction of mild ketonemia, due to consumption of a ketogenic diet, intermittent fasting or caloric restriction, proved beneficial in animal models, leading to improved metabolic profiles, extended lifespans and improved neurological responses. In humans, a KD may contribute to alleviating neurological disorders [3]. On the other hand, KD-induced persistent mild ketonemia rises low density lipoprotein cholesterol levels, potentially increasing the risk of cardiovascular disease [4], although the KD-induced rise in low density lipoprotein cholesterol levels is not unequivocally observed in all studies [5,6]. Here, we will discuss the effects of ketone bodies on cellular metabolism, and their link to pathophysiology, while also considering the impact of KB as epigenetic modulators, as there is a large and growing body of evidence demonstrating a role of KB, particularly β-hydroxybutyrate, in the regulation of chromatin histone post-translational modifications (PTMs), and thus in the transcriptional machinery. 
BHB, also designated as D-3-hydroxybutyric acid, is the most abundant ketone body, constituting around 70% of the circulating KB pool. Quantitatively, BHB is mostly produced by the liver using acetyl-CoA derived from beta-oxidation of lipids. Acetoacetate, a BHB biosynthetic precursor, and its decarboxylation product acetone, are the two quantitatively less abundant-and unstable-ketone bodies. BHB crosses the blood-brain barrier, and can substitute glucose as fuel. Besides the brain, BHB is also used as an alternative source of energy to glucose in all extra-hepatic tissues [7,8]. BHB, besides serving as an alternative energy source to glucose, also acts as a signaling molecule involved in many cellular functions, including epigenetic regulation of gene transcription. The pleiotropic potential of BHB is also related to the occurrence of a polymerized form of BHB, poly-β-hydroxybutyrate (PHB). While PHB was first described in bacteria, in which it is found in large intracellular granules acting as energy stores [9], more recent studies demonstrated the presence of PHB in mammalian cells, where it acts to regulate intracellular signaling, mitochondrial functions and calcium channel activity [10][11][12]. Anabolism and Catabolism of Ketone Bodies Ketogenesis takes place in the mitochondria of perivenous hepatocytes, and marginally in astrocytes of the brain, in Lgr5 + intestinal stem cells and in T-cells [8,[13][14][15]. Hepatic production of ketone bodies is a physiological response to prolonged exercise, starving or reduced carbohydrate nutritional intake, but is also a pathological consequence of beta-cells failing to secrete insulin in diabetes. Under these circumstances, the liver starts producing ketone bodies from acetyl-CoA derived from the β-oxidation of fatty acids [16]. Ketogenesis is promoted when mitochondria fail to provide a sufficient amount of oxaloacetate to condense with acetyl-CoA to form citric acid and enter the Krebs cycle. Thus, acetyl-CoA is funneled through ketogenesis ( Figure 1). In the first step of ketogenesis, thiolase condensates two molecules of acetyl-CoA into acetoacetyl-CoA (AcAc-CoA), which is the substrate for β-hydroxy-β-methylglutaryl-CoA synthase 2 (HMG-CoA synthase 2), leading to the synthesis of HMG-CoA. In turn, HMG-CoA lyase metabolizes HMG-CoA to the unstable ketone body-acetoacetate (AcAc). AcAc is finally converted into stable BHB by D-β-hydroxybutyrate dehydrogenase (BDH1). Due to spontaneous decarboxylation, a fraction of the AcAc pool undergoes spontaneous decarboxylation to yield acetone, which is excreted from the body with urine and exhaled by the lungs-yielding a characteristically sweet, fruity breath. Transport of BHB through the plasma membrane occurs via the monocarboxylate transporter proteins (MCT). Only 3 out of 14 MCT isoforms, MCT1, 2 and 4, are involved in BHB transport. MCT expression is tissue-specific, with MCT1 being ubiquitously expressed, MCT2 being specifically expressed in the brain and kidney, and MCT4 being expressed in skeletal muscle, heart, lung and brain [17,18]. After reaching the mitochondria of the target cells, BHB is metabolized back into acetylCoA ( Figure 1B). In the extrahepatic organs/tissues (i) BHB is converted to AcAc by BDH1. Then (ii) AcAc is incorporated into AcAc-CoA in a reaction catalyzed by 3-oxoacid-CoA transferase (SCOT), the OXCT1 gene product (to avoid a futile cycle, the expression level of SCOT in hepatic cells is very low). 
In the final step, (iii) AcAc-CoA is transformed by acetoacetyl-CoA thiolase to two molecules of acetyl-CoA, which are consumed in the Krebs cycle or transported to the cytosol for cholesterol synthesis [1] ( Figure 1C). Ketogenesis as A Physiological Response to Starving and Prolonged Physical Exercise and as A Pathological Phenomenon in Diabetes Production of ketone bodies is physiologically tuned to maintain physiological concentrations of BHB in the 0.05-0.1 mM range. Ketogenesis is intensified under conditions characterized by insufficient or inaccessible availability of glucose [19]. Physiologically, ketogenesis is induced by caloric restriction or prolonged exercise, resulting in accumulation and elevation of the circulating level of KBs up to 5 mM [19,20]. After ingestion of carbohydrates, the levels of ketone bodies revert to basal concentrations, as glucose is the preferable source of energy for the organism. In diabetic subjects, increased levels of ketone bodies can occur despite the high glucose plasma concentrations due to defective insulin release and impaired glucose uptake by the insulin-sensitive tissues. Under these conditions, the liver produces ketone bodies to serve the brain, heart and skeletal muscles which, due to insulin resistance and impaired glucose uptake/internalization, cannot rely on glucose supply [21]. Insulin injection may revert KB levels [19]. The in vitro studies performed on skeletal muscle isolated form mice subjected to physical exercise (swimming) for 60 min at 35 • C, have shown that 4 mM BHB significantly improves glycogen repletion in epitrochlearis muscle, the major determinant of exercise performance [22]. In patients with poorly controlled diabetes, increased levels of KB may lead to diabetic ketoacidosis, with KB concentrations exceeding 20 mM. Because of their acidic pH, elevated concentrations of ketone bodies observed in diabetic ketoacidosis affect the electrolyte balance, causing cell damage and dehydration, as the organism will strive to eliminate KB excess via the urine. Untreated diabetic ketoacidosis can cause coma and even death. Epigenetic Effects of Ketone Bodies Epigenetic modifications constitute a key element of regulation of gene transcription. Recent findings suggest that ketone bodies coordinate cellular functions via a novel epigenetic modification-β-hydroxybutyrylation [23,24]-that integrates the classic DNA methylation and histone covalent posttranslational modifications (PTMs), including histone lysine acetylation, methylation and histone phosphorylation and ubiquitination ( Figure 2). In response to high levels of β-hydroxybutyrate, a new type of histone posttranslational modification was identified, lysine β-hydroxybutyrylation (Kbhb), which takes place on specific lysines of histones, but also other cellular proteins, including p53 [25,26]. Using in vitro cell line models, and organs (mainly liver) from mice undergoing long-term fasting or streptozotocin-induced diabetic ketoacidosis, Xie et al. identified 44 lysines in histone proteins susceptible to β-hydroxybutyrylation, including H1K168, H2AK5/K125, H2BK20, H3K4/K9/K14/K23 and H4K8/K12 [25,27]. By genome-wide analysis (ChIP-seq) associated with transcriptional profiling, it was found that β-hydroxybutyrylation of histones produces a transcription-promoting mark enriched in active gene promoters. Moreover, the increased level of H3K9bhb, which occurs during starvation, is associated with genes upregulated in starvation-responsive metabolic pathways. 
These newly identified histone PTMs represent new epigenetic regulatory marks that link metabolism to gene expression, offering a new avenue to study chromatin regulation and the diverse functions of BHB in the context of important human pathophysiological states. The sequencing data revealed that H3K9bhb defines a set of upregulated genes that differ from upregulated genes bearing the H3K9ac and H3K4me3 marks, suggesting that histone Kbhb has different transcriptional-promoting functions from histone acetylation and methylation [25]. The effects of BHB on the establishment of histone posttranslational modifications other than histone Kbhb are more contradictory, especially with respect to histone acetylation. BHB was initially identified as an endogenous inhibitor of class I and IIa histone deacetylases (HDACs), which affect gene expression and chromatin modification [28]. A dose-dependent histone hyperacetylation, especially on lysines 9 and 14 of histone 3 (H3K9/K14), was identified after BHB treatment of HEK293 cells and in C57BL6/J mice maintained on caloric restriction or with elevated levels of BHB via a subcutaneous pump delivery [29,30]. However, more recent data from Chriett et al. did not confirm the function of BHB as a histone deacetylase inhibitor. Experiments performed on multiple cell lines, including HEK293 cells, myotubes (L6) and endothelial cells (HMEC-1), showed that BHB administration did not increase histone acetylation, and BHB treatment of crude nuclear extracts did not inhibit histone deacetylase catalytic activity [31]. These data are in line with the study performed by Xie et al. that presented a BHB dose-dependent induction of β-hydroxybutyrylation on multiple histone lysines with only marginal changes in the acetylation patterns [25]. Beyond the ongoing discussion regarding the histone deacetylase inhibitory potential of BHB, it was nonetheless demonstrated that some histone hyperacetylation following BHB treatment might be consequential to the increased intracellular acetyl-CoA pool formed by the administration of ketone bodies [32]. Besides promoting histone acetylation, such high levels of acetyl-CoA also increase the acetylation of the mitochondrial proteins [32]. Furthermore, as histone acyltransferase activity is inversely correlated to the length of the acyl chain substrate, histone acyltransferases use BHB-CoA as substrate in a less efficient manner compared to acetyl-CoA. By virtue of this, the relative abundance of lysine-hydroxybutyrylation on histone 3 (H3) and 4 (H4) is underrepresented (less than 1% of the total histone marks) compared to lysine acetylation (15-30%) [27]. A further complexity towards understanding of the overall effect of BHB on the chromatin acetylation patterns is added by the energetic potential of cell (i.e., the NAD + /NADH ratio) that is significantly modified depending on the energy fuel available to the cell: BHB or glucose. Indeed, the production of two moles of acetyl-CoA using BHB as precursor reduces only one mole of NAD + to NADH, while four moles of NAD + (and 4 NADH equivalents) are produced with glucose as an energy source. The excess of the NAD + availability that results from a ketogenic diet likely exerts a positive influence on the redox state of the cell and potentially modulates activity of NAD + -dependent enzymes, including sirtuins, involved in deacetylation processes [32]. Another important aspect of epigenetic potential of the ketone bodies is their effect on the DNA and histone methylation status. 
Multiple studies showed that the ketogenic diet attenuates the incidence of seizures in epilepsy. However, the biological mechanism(s) whereby ketone bodies relieve the symptoms of the disease remain poorly understood, pointing at adenosine as a putatively relevant molecule curbing epilepsy progression [33]. The anticonvulsive action of a ketogenic diet was observed even after a transient administration of ketogenic therapy, and some long-term protection was apparent even after returning to a normal control diet [34]. Epigenome-wide sequencing analysis revealed significant increases in the DNA methylation levels in the hippocampi of rats suffering from chronic epilepsy [35]. In this model, a ketogenic diet therapy, beyond the attenuation of seizure progression, corrected DNA methylation-mediated changes in gene expression [35]. Detailed analysis revealed that the ketogenic diet increases adenosine's presence, which efficiently blocks DNA methylation [23,33]. The putative contributions of BHB in the shaping of the DNA methylation profile and histone methylation status, seem to be related to the acetyl-CoA pool that, together with glycine, is needed for S-adenosylmethionine (SAM) synthesis. Recent studies performed on epileptic rodents showed that unbalanced dietary protein composition within a ketogenic diet may mask the anti-seizure effects of the ketogenic component of said diet, leading to an exacerbation of the seizures observed in epilepsy, possibly due to threonine deficiency, an amino acid crucial for providing a substantial fraction of intracellular glycine, and in turn, acetyl-CoA and SAM [36,37]. Signaling Pathways Linking Ketone Bodies to Protection from Oxidative Stress Ketone bodies are not only a fat-derived energy supply form for the brain, skeletal muscle or heart under starvation or intense exercise. In 2000, Kashiwaya and co-workers found that BHB can protect neurons from oxidative damage [38]. They found that treating cells with BHB reduced the cytosolic [NADP + ]/[NADPH] ratio and increased reduced glutathione, one of the major low molecular weight antioxidant agents in the cell. Moreover, treatment of neurons with ketone bodies revealed a decreased amount of semiquinone [38]. It was also demonstrated that in cells submitted to a pro-inflammatory stimulation by LPS treatment, BHB inhibited NF-κB by translocation and degradation of IκB-α. As NF-κB regulates expression of multiple pro-inflammatory genes, including iNOS, COX-2, TNF-α, IL-1β and IL-6, the administration of BHB to cells diminished the pro-inflammatory response to LPS [39]. Protection Against Oxidative Stress in Spinal Cord Injury Spinal cord injury is characterized by motor, vegetative and sensitive dysfunction. The chance of recovering from such an injury is very low. A pathophysiological injury of the spinal cord causes an elevation of free radicals, which finally results in damage to surrounding tissues, causing multiple negative effects. Additionally, the blood-brain barrier, which isolates cerebrospinal fluid from blood, prevents infiltration of most antioxidants circulating in blood and does not support the recovery process. KB locally produced by astrocytes can exert a potential antioxidative effect on the spinal cord. It has been shown that ketone bodies can regulate the levels of antioxidant genes, including MnSOD and catalase, or the level of glutathione [29,40]. 
The ability of these genes to decrease semiquinone may also be considered as an antioxidant action, as it prevents free radical formation [33]. The Impact of Ketone Bodies/Ketogenic Diet on Alzheimer's Disease Alzheimer's disease, the most significant cause of dementia, is associated with impaired glucose utilization in the brain and mitochondrial dysfunction [41]. The energy imbalance caused by the reduced glucose uptake, downregulation of glucose transporters (GLUT1) and inefficient glycolysis, alters amyloid precursor protein processing leading to the production of the neurotoxic amyloid β-peptide and consequential loss of neurons and cognitive deficits [41,42]. Ketone bodies, as an alternative energy source, are often pointed out as a possible rescue window for glucose hypometabolism in neurodegenerative disease. The studies performed on a mouse model of Alzheimer's disease treated with the ketogenic diet showed significantly a decreased level of amyloid β-peptide in the brain and improved mitochondrial function [43,44]. A number of animal studies have shown the benefits of a ketogenic diet: better mitochondrial function, reduced oxidative stress, reduced amyloid β-peptide deposition and ameliorated tau protein pathology [43][44][45][46]. Clinical trials on human volunteers, mostly focused on mild to moderate Alzheimer's disease patients, identified that apolipoprotein E4 (ApoE4) genotype has an effect on the outcome of ketogenic diet intake. Patients without ApoE4 allele (ApoE4(-)) presented improved short-term cognitive performance in terms of memory, language and attention, whereas ApoE4(+) patients were characterized by a reduced response to ketogenic diet treatment [47][48][49][50]. The Role of β-hydroxybutyrate in Ischemia/Reperfusion of Heart and Brain Injury The two most susceptible organs to diminished oxygen concentration are the heart and brain. In both cases, insufficient delivery of oxygen and nutrients, due to arterial/coronary ischemia (in the heart) or artery blockade/leakage (in the brain), leads to severe pathological conditions, ischemic heart disease or brain stroke, respectively. The best way to minimize the induced damage is the rapid and early restoration of circulation in the damaged vessel-termed reperfusion. Paradoxically, an overly-rapid reperfusion leads to myocardial cell death, or lethal myocardial reperfusion injury [51]. Similarly, in brain tissue, reperfusion induces anaerobic glycolysis, leading to accumulation of lactate and promotion of cell death [52]. Studies performed on adult Wistar rats have shown that elevated levels of ketone bodies (AcAc and BHB), as a result of 24 h starvation, decreased ischemic and reperfusion damage in rat hearts [52,53], whereas intermittent fasting of wild type mice decreased by about 50% the infarct size caused by ischemia/reperfusion [54]. Additionally, treatment of mice with BHB caused reduction of the lipid peroxidation product malondialdehyde (MDA) in myocardium tissue [51]. Suzuki et al. have shown that in rats with induced brain ischemia, animals treated with BHB survived longer, and ATP levels in brain remained much higher than in a control group infused with saline [52]. It was also found that administration of BHB to rats with induced brain ischemia diminished the infarct area and edema formation, and decreased lipid peroxidation. Administration of BHB also mitigated neurological defects [52]. 
These observations prove that BHB is not only an alternative energy source but also a signaling molecule which can modulate the oxidative stress response and other metabolic pathways, leading to antioxidant protective functions in ischemic and neurological disorders. The Protective Role of β-hydroxybutyrate in Hypertension Hypertension, by definition a repeatedly elevated systolic blood pressure exceeding 140 over a diastolic pressure of 90 mmHg, is one of the strongest cardiovascular risk factors. Hypertension occurs very often in association with various medical conditions, including diabetes, obesity and chronic renal insufficiency, but also in association with low physical activity, cigarette smoking and an unhealthy diet [55,56]. Individuals diagnosed with hypertension are recommended to change their lifestyles into more healthy ones, both from dietary and physical exercise perspectives. Experiments performed on rats showed that blood pressure raises with salt content in the diet and is reduced under mild ketosis [57,58]. Dietary administration of the BHB precursor 1,3-butanediol to rats on a high-salt diet reverted blood pressure to values observed in a low salt diet control group. Moreover, administration of 1,3-butanediol reduced the activity of the Nlrp3 inflammasome, a major inductor of the expression of inflammatory factors, such as caspase-1, IL-1β and IL-18 [58]. These results suggest that BHB can modulate the expression of the inflammasome and associated inflammatory genes via histone beta-hydroxybutyrylation, and such histone patterns and resulting gene expression levels alleviate inflammatory responses and lower blood pressure. The link between Ketone Bodies via Nutritional Intake and Physical Performance In healthy adults, the oxidation of ketone bodies provides only a minor fraction of total body energy, but in the heart, brain and skeletal muscles, ketone body metabolism can be significantly increased in physiological conditions such as inter-alia fasting or low carbohydrate diet [59]. In a similar manner, supplementation of medium-chain triglycerides (C8 to C10) increased plasma ketone levels (+19%) while slightly reducing glycemia (-12%) suggesting the occurrence of an alternative fuel use under mild ketosis [60]. An alternative source of ketone bodies is the ketogenic diet, where 85% of total calories come from fat, 10% form proteins and only 5%, or less, from carbohydrates. There are social groups, especially athletes, who strive to reduce body fat, but this goal is often attained through nutritional restrictions that can have serious health consequences. Many studies have confirmed the effectiveness of the ketogenic diet in selective reduction of body fat without significant loss of non-fat body tissues [61]. However, there is disagreement in the scientific community on the perception of the influence of ketogenic diet on athletes' endurance. It seems that the impact of ketogenic diet on aerobic performance depends on three factors: (i) exercise intensity, (ii) the status of body training and (iii) the length of the period of diet habituation. The body responds quickly to dietary changes. A study showed that only three days of high fat/moderate protein diet resulted in a decrease in physical performance of non-trained individuals [59]. Similarly, a significant decrease in endurance was observed after 6 weeks of ketogenic diet period, verified during a 45-minute cycling test [62]. 
Contrary to these observations, Cox and collaborators demonstrated increased endurance performance in high-level athletes after the administration of the edible ketone body (R)-3-hydroxybutyl (R)-3-hydroxybutyrate ketone ester, a molecule of choice to achieve ketosis without using free-acid BHB or sodium BHB, which substantially affect the body's acid and salt homeostasis, respectively [63]. Athletes react differently to an increased supply of fats. A low-carbohydrate diet leads to physiological adaptation. During aerobic endurance exercise, fat becomes the dominant energy substrate, and the remaining carbohydrate resources remain intact [64]. Such observations were made both after 4 and after 20 months of using the ketogenic diet. After 20 months of the ketogenic diet, a group of ultra-endurance runners showed much higher fat oxidation rates and lower carbohydrate oxidation rates during a 180-minute run [65]. An elevated fatty acid stream leads to the development of adaptive mechanisms by active tissues: increased mitochondrial β-oxidation and reduced glucose oxidation [66,67]. The cellular mechanisms responsible for this metabolic shift are, however, not yet fully understood. It is known that an increased supply of fats results in their increased availability for lipid oxidation during aerobic exercise, but we know little about the effect of a ketogenic diet on strength performance. An initial study shows that the KD does not improve strength performance compared to a carbohydrate-rich diet [68]. Some studies claim that the ketogenic diet does not expose athletes to performance limitations, especially regarding strength [45]. Lambert and colleagues showed that 2 weeks of a high-fat diet (70% fat) did not reduce the strength of cyclists during an intense workout, and their strength during a moderate-intensity workout was even improved [69]. Zajac et al. provided data on the modulation of exercise metabolism by a ketogenic diet in cyclists. After 4 weeks of KD, an increase in the maximum oxygen uptake and in the oxygen uptake at the lactate threshold was observed. This was associated with a decrease in body weight and/or a higher oxygen uptake in order to achieve the same energy efficiency as in a mixed diet. The maximum workload and the workload at the lactate threshold were significantly higher after KD compared to the mixed diet [70]. In conclusion, current discoveries regarding KD and aerobic exercise require further investigation into how the training status affects adaptation to KD and the resulting performance [71,72]. Curbing Cancer Progression with Ketone Bodies/Ketogenic Diet Cancer cells need a lot of energy to support their enhanced proliferation rate. While in non-cancerous cells carbohydrates enter glycolysis to generate pyruvate, which is then funneled into the Krebs cycle and the mitochondrial electron transport chain, tumor cells generate energy mostly by glycolysis. This phenomenon is known as the Warburg effect, which can be considered an adaptive response allowing carbons to be shuttled towards anabolic pathways rather than being completely oxidized in the mitochondria (Figure 3). The preference for glycolysis instead of oxidative phosphorylation to produce ATP is explained by defects of glycolytic and ketolytic enzymes in the mitochondria of tumor cells [73]. An alternative approach explaining energy production in cancer cells postulates energy transfer from normal cells in a "reverse Warburg effect" process [74]. 
In both cases, the tumor microenvironment is acidified, which promotes metastasis. In several instances, but not unequivocally, cancer progression correlates with weight loss. However, there is growing evidence that excessive body mass can also be detrimental to cancer patients [75]. Studies providing support for the anti-carcinogenic effects of the ketogenic diet (KD) implicate that mitochondrial dysfunction of cancer cells, and the concomitantly reduced expression of ketolytic enzymes, may contribute to this effect. When the blood glucose levels are falling, the cancer cells starve, whereas the normal cells change their metabolism to utilize KBs to survive. Moreover, the decrease in insulin level that accompanies the indicated ketonemic conditions correlates with a decrease in insulin-like growth factors, which promotes the proliferation of cancer cells [76]. It was shown on neuroblastoma xenografts in a CD1-nu mouse model that a ketogenic diet consisting of fat (25% medium-chain triglycerides and 75% long-chain triglycerides) and carbohydrates + protein gave the same effective therapeutic effect against neuroblastoma as did the classical ketogenic diet combined with caloric restriction [77]. Besides neuroblastoma, the strongest effect of the KD as an adjuvant cancer therapy has been described against glioblastoma. The currently available medical literature provides evidence for the safe application of a KD only in patients with glioblastoma [78,79]. However, some promising evidence in favor of the KD in the treatment of prostate, colon, pancreas and lung cancers has also been reported [66]. On the contrary, only limited evidence is available on the anti-cancer effect of the KD on stomach and liver cancer. Evidence of anticancer activity of KD was obtained mainly in animal models, while the support of these conclusions in humans was limited to individual cases. Additionally, controversies have arisen about the safety of using the KD. A study investigating renal cancer in a rat model with tuberous sclerosis complex indicated a carcinogenic effect of the KD long-term. Alarming results were also obtained by analyzing the effect of KD on BRAF V600E-expressing melanoma in xenograft mice. In this model, a high-fat ketogenic diet increased the level of acetoacetate in the serum, leading to increased tumor growth in human melanoma cells [80]. One more KD function was observed-synergistic action with chemo and radio-therapy. This is confirmed by studies on animals and a patient with glioblastoma who managed to achieve complete remission after treatment with radiotherapy, a restrictive ketogenic diet and temozolomide [81]. In conclusion, the published data suggest that ketogenic diet may be safely used as an adjuvant therapy for the selected forms of cancer, in addition to the conventional treatment. Due to the physiological differences between animals and humans, the studies on patients with various types of cancer treated with KD are needed [78]. However, it is unlikely that a ketogenic diet could be used as a primary anticancer therapy. Future Directions Research aimed to decipher the impact of ketone bodies within a physiological range on multiple aspects of human physiology and pathology is a clearly expanding field which will undoubtedly further develop in the next few years, as ketone bodies-or the provisioning of endogenously synthesized ketone bodies by a ketogenic diet-hold promise to address a number of pathologies, including neurodegeneration, cancer and metabolic disease. 
While the precise molecular model(s) of action of ketone bodies will require further investigation, it is now established that ketone bodies, and in particular BHB, directly impinge in transcriptional regulation via epigenetic modulation and modulate inflammatory processes. While findings in animal models are not always reproduced in the clinical setting of studies in humans, it is already proven that the ketogenic diet can be successfully used to treat pediatric epilepsy forms refractory to pharmaceutical therapy [82], as shown in several randomized controlled trials [83,84], retrospective studies [85,86] and a meta-analysis [5]. According to the current literature, it can also be speculated that ketone bodies will prove beneficial in promoting healthy aging and in alleviating the burden of metabolic disease [87], and in some instances, as a useful adjuvant during the treatment of certain cancers [78]. Conflicts of Interest: The authors declare no conflict of interest.
Solar system chaos and the Paleocene-Eocene boundary age constrained by geology and astronomy Astronomical calculations reveal the solar system's dynamical evolution, including its chaoticity, and represent the backbone of cyclostratigraphy and astrochronology. An absolute, fully calibrated astronomical time scale has hitherto been hampered beyond ∼50 Ma, because orbital calculations disagree before that age. Here we present geologic data and a new astronomical solution (ZB18a), showing exceptional agreement from ∼58 to 53 Ma. We provide a new absolute astrochronology up to 58 Ma and a new Paleocene-Eocene boundary age (56.01 ± 0.05 Ma). We show that the Paleocene-Eocene Thermal Maximum (PETM) onset occurred near a 405-kyr eccentricity maximum, suggesting an orbital trigger. We also provide an independent PETM duration (170 ± 30 kyr) from onset to recovery inflection. Our astronomical solution requires a chaotic resonance transition at ∼50 Ma in the solar system's fundamental frequencies. Numerical solutions for the solar system's orbital motion provide Earth's orbital parameters in the past, widely used to date geologic records and investigate Earth's paleoclimate [1][2][3][4][5][6][7][8][9][10][11]. The solar system's chaoticity imposes an apparently firm limit of ∼50 Ma on identifying a unique orbital solution, as small differences in initial conditions/parameters cause astronomical solutions to diverge around that age (Lyapunov time ∼5 Myr, see Supplementary Materials) [4,6,12,13]. Recent evidence for a chaotic resonance transition (change in resonance pattern, see below) in the Cretaceous (Libsack record) [9] confirms the solar system's chaoticity but, unfortunately, does not provide constraints to identify a unique astronomical solution. The unconstrained interval between the Libsack record (90-83 Ma) and 50 Ma is too large a gap, allowing chaos to drive the solutions apart (see Supplementary Materials). Thus, proper geologic data around 60-50 Ma are essential to select a specific astronomical solution and, conversely, the astronomical solution is essential to extend the astronomically calibrated time scale beyond 50 Ma. We analyzed color reflectance data (a*, red-to-green spectrum) [7,8] at Ocean Drilling Program (ODP) Site 1262 (see Supplementary Materials); a*-1262 for short, a proxy for changes in lithology [7]. The related Fe-intensity proxy [8] gives nearly identical results (Fig. S4). We focus on the section ∼170-110 m (∼58-53 Ma), which exhibits a remarkable expression of eccentricity cycles at Site 1262 (refs. 7, 8, 10, 14, 15), less so in the preceding (older) section. Our focus interval includes the PETM and Eocene Thermal Maximum 2 (ETM2), extreme global warming events (hyperthermals), considered the best paleo-analogs for the climate response to anthropogenic carbon release [16][17][18]. The PETM's trigger mechanism and duration remain highly debated [19][20][21]. Thus, in addition to geological and astronomical implications, unraveling the chronology of events in our studied interval is critical for understanding Earth's past and future climate. We developed a simple floating chronology, attempting to use a minimum number of assumptions (see Supplementary Materials). We initially employed a uniform sedimentation rate throughout the section (except for the PETM) and a root mean square deviation (RMSD) optimization routine to derive ages (for the final age model and differences from previous work, see Supplementary Materials). 
No additional tuning, wiggle-matching, or pre-existing age model was applied to the data. Using our floating chronology, the best fit between the filtered and normalized data target a** (a-double-star, Fig. 1) and a given astronomical solution was obtained by minimizing the RMSD between record and solution while shifting a** along the time axis (offset τ) over a time interval of ±200 kyr, with ETM2 centered around 54 Ma (see Supplementary Materials). Before applying the minimization, both a** and the solution were demeaned, linearly detrended, and normalized to their respective standard deviation (Fig. 1). It turned out that one additional step was necessary for a meaningful comparison between a** and astronomical solutions. Relative to all solutions tested here, a** was consistently offset (shifted towards the PETM after optimizing τ) by about one short eccentricity cycle for ages either younger (some solutions) or older than the PETM (other solutions). The consistent offset relative to the PETM suggests that the condensed PETM interval in the data record is the culprit, for which we applied a correction, also obtained through optimization. At Site 1262, the PETM is marked by a ∼16 cm clay layer (<1 wt% CaCO3), largely due to dissolution and some erosion across the interval [16,22], although erosion of Paleocene (pre-PETM) sediment alone cannot account for the offset of about one short eccentricity cycle (see Supplementary Materials). Sedimentation rates were hence nonuniform across the PETM interval [8,10,16] and primary lithologic cycles from variations in CaCO3 content are not preserved within the clay layer. Thus, we corrected the condensed interval by stretching a total of k grid points across the PETM by ∆z for a total length of ∆L = k∆z and included k as a second parameter in our optimization routine (see Supplementary Materials). Our new astronomical solution ZB18a (computations build on our earlier work [6,23,24], see Supplementary Materials) agrees exceptionally well with the final a** record (Fig. 1b). Contrary to current thinking [8,14,20,27,29], the PETM onset therefore occurred temporally near, not distant from, a 405-kyr maximum in Earth's orbital eccentricity (Fig. 1, cf. also ref. 10). As for ETM2 and successive early Eocene hyperthermals [7,29,30], this suggests an orbital trigger for the PETM, given theoretical grounding and extensive, robust observational evidence for eccentricity controls on Earth's climate [2, 7-10, 14, 20, 26-32]. Note, however, that the onset does not necessarily coincide with a 100-kyr eccentricity maximum (see below). Our analysis also provides an independent PETM main phase duration of 170 ± 30 kyr from onset to recovery inflection (for tie points, see Fig. S6 and Supplementary Materials). This duration might be an underestimate, given that sedimentation rates increased during the PETM recovery (compacting the recovery would require additional stretching of the main phase). Our duration is significantly longer than 94 kyr (ref. 20) but agrees with the 3He age model at Site 1266 (167 +34/−24 kyr) [21] and is consistent with >8 cycles in Si/Fe ratios at Zumaia [31], which, we suggest, are full (not half) precession cycles. If high orbital eccentricity (e) also contributed to the long PETM duration (e ≃ 0.025-0.044 during the PETM), then the potential for prolonged future warming from eccentricity is reduced due to its currently low values (e ≃ 0.0024-0.0167 during the next 100 kyr). A similar argument may hold for eccentricity-related PETM trigger mechanisms. 
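To make the age-model optimization described above concrete, the sketch below illustrates the time-offset part of the fit; the stretch parameter k for the condensed PETM interval would add a second loop around the same RMSD metric. This is a minimal illustration under the assumption of evenly sampled, equal-length input arrays; the function and variable names are ours, not the authors' actual code.

```python
import numpy as np

def standardize(x):
    """Demean, linearly detrend, and scale to unit standard deviation."""
    t = np.arange(len(x))
    slope, intercept = np.polyfit(t, x, 1)
    x = x - (slope * t + intercept)          # removes linear trend and mean
    return x / np.std(x)

def best_time_offset(record, solution, dt, max_shift=200.0):
    """Slide the normalized data record against an astronomical solution
    (both sampled every dt kyr, same length) and return the offset tau
    (in kyr) that minimizes the root mean square deviation (RMSD)."""
    record, solution = standardize(record), standardize(solution)
    n = len(record)
    n_max = int(max_shift / dt)
    best_rmsd, best_tau = np.inf, 0.0
    for shift in range(-n_max, n_max + 1):
        if shift >= 0:
            r, s = record[shift:], solution[:n - shift]
        else:
            r, s = record[:n + shift], solution[-shift:]
        rmsd = np.sqrt(np.mean((r - s) ** 2))
        if rmsd < best_rmsd:
            best_rmsd, best_tau = rmsd, shift * dt
    return best_rmsd, best_tau
```

In the paper's terms, the record would be the a** proxy series and the solutions would be candidates such as ZB18a or La10c; comparing solutions then reduces to asking which one attains the smallest minimum RMSD (cf. Table 1).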
The PETM occurred superimposed on a long-term, multimillion-year warming trend [7,30]. Our solution ZB18a shows a 405-kyr eccentricity maximum around the PETM but reduced 100-kyr variability (Fig. 1b). Eccentricity in ZB18a remained high prior to the PETM for one short eccentricity cycle (Fig. 1b, arrow), suggesting the combination of orbital configuration and background warming [30,32] forced the Earth system across a threshold, resulting in the PETM. While similar thresholds may exist in the modern Earth system, the current orbital configuration (lower e) and background climate (Quaternary/Holocene) are different from 56 million years ago. None of the above, however, will directly mitigate future warming and is therefore no reason to downplay anthropogenic carbon emissions and climate change. Our astronomical solution ZB18a shows a chaotic resonance transition (change in resonance pattern) [33] at ∼50 Ma, visualized by wavelet analysis [34] of the classical variables h = e sin ϖ, k = e cos ϖ, p = sin(I/2) sin Ω, and q = sin(I/2) cos Ω, where e, I, ϖ, and Ω are eccentricity, inclination, longitude of perihelion, and longitude of ascending node of Earth's orbit, respectively (Fig. 2). The wavelet spectrum highlights several fundamental frequencies (g's and s's) of the solar system, corresponding to eigenmodes; for example, g3 and g4 are the secular frequencies associated with the perihelion precession of Earth's and Mars' orbits, respectively. Notably, parameters required for long-term integrations compatible with geologic observations (e.g., ZB18a vs. a**, Fig. 1) appear somewhat incompatible with our best knowledge of the current solar system. For instance, ZB18a is part of a solution class featuring specific combinations of number of asteroids and solar quadrupole moment (J2), with J2 values lower than recent evidence suggests (Supplementary Materials). In addition, the La10c solution [33] with a small RMSD (see Table 1) used the INPOP08 ephemeris, considered less accurate than the more recent INPOP10 used for La11 [13]. Yet, La10c fits the geologic data better than La11 (Table 1 and ref. 27). The resonance transition in ZB18a is an unmistakable manifestation of chaos and also key to distinguishing between different solutions before ∼50 Ma, e.g., using the g4 − g3 term. This term modulates the amplitude of eccentricity and, e.g., the interval between consecutive minima in a 2-Myr filter of eccentricity (Fig. 3). Other solutions such as La10c [33] also show a resonance transition around 50 Ma. However, the pattern for ZB18a is different prior to 55 Ma, which is critical for its better fit with the data record from 58-53 Ma (smaller RMSD, see Table 1, Fig. 1). For example, P43 (the period of the g4 − g3 term) is ≃2 and ∼1.6 Myr at ∼59 and ∼56 Ma in La10c but is rather stable at ∼1.5-1.6 Myr across this interval in ZB18a. Briefly, to explain the geologic record, our astronomical solution requires that the solar system is (a) chaotic and (b) underwent a specific resonance transition pattern between ∼60-50 Myr BP.
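As a rough illustration of how such a resonance pattern can be probed numerically, the sketch below builds the classical variables from a time series of Earth's orbital elements and tracks the dominant frequency of one of them in overlapping windows. A windowed FFT is used here as a simple stand-in for the wavelet analysis of ref. 34; it is not the authors' pipeline, and the array names, sampling, and units are assumptions.

```python
import numpy as np

def classical_variables(e, inc, varpi, Omega):
    """Classical secular variables from Earth's orbital elements (angles in radians)."""
    h = e * np.sin(varpi)
    k = e * np.cos(varpi)
    p = np.sin(inc / 2.0) * np.sin(Omega)
    q = np.sin(inc / 2.0) * np.cos(Omega)
    return h, k, p, q

def sliding_spectrum(x, dt_kyr, window_myr=20.0, step_myr=5.0):
    """Crude time-frequency view of a secular variable: a Hann-windowed FFT in
    overlapping windows, returning the dominant frequency (in cycles/kyr) and
    the midpoint time (in kyr) of each window."""
    win = int(window_myr * 1e3 / dt_kyr)
    step = int(step_myr * 1e3 / dt_kyr)
    mids, fpeaks = [], []
    for i0 in range(0, len(x) - win, step):
        seg = (x[i0:i0 + win] - np.mean(x[i0:i0 + win])) * np.hanning(win)
        spec = np.abs(np.fft.rfft(seg))
        freqs = np.fft.rfftfreq(win, d=dt_kyr)
        fpeaks.append(freqs[1:][np.argmax(spec[1:])])   # skip the zero frequency
        mids.append((i0 + win / 2) * dt_kyr)
    return np.array(mids), np.array(fpeaks)
```

Applied to, say, h = e sin ϖ, peaks appear near the g-frequencies, and following how those peaks drift through time is what reveals a change such as the shift in the g4 − g3 term discussed above.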
Visual Acuity and Size of Choroidal Neovascularization in Highly Myopic Eyes with a Dome-Shaped Macula Introduction A dome-shaped macula (DSM) is an inward convexity or anterior deviation of the macular area. DSM is believed to be a protective factor in maintaining visual acuity in highly myopic eyes. Objective To investigate the correlation between best-corrected visual acuity (BCVA), choroidal neovascularization (CNV), and a dome-shaped macula (DSM) in highly myopic eyes. Methods In this retrospective, observational case series, BCVA tests and optical coherence tomography (OCT) were performed in a total of 472 highly myopic eyes (refractive error ≥6.5 diopters or axial length ≥26.5 mm). CNV was detected by fundus fluorescein angiography (FFA), and the CNV area was measured by ImageJ software. BCVA, central retinal thickness (CRT), and the CNV area were compared between highly myopic eyes with and without DSM. Results The data revealed 13 eyes with DSM complicated by CNV, for an estimated prevalence of 25%. The eyes with CNV in the DSM group showed worse BCVA than those in the non-DSM group (1.59 ± 0.69 and 0.63 ± 0.64, respectively, p < 0.05), and the CNV area in the DSM group was larger than that in the non-DSM group (2793.91 ± 2181.24 and 1250.71 ± 1210.36 pixels, respectively, p < 0.05). After excluding the eyes with CNV, the DSM group had better BCVA than the non-DSM group (0.33 ± 0.17 and 0.44 ± 0.48, respectively, p < 0.05); however, no significant difference was observed in the CRT of eyes with CNV between the DSM group and the non-DSM group. Conclusion These results show that DSM might be a protective mechanism for visual acuity, but its protective capability is limited. DSM eyes have better visual acuity within the protective capability. If a more powerful pathogenic factor exceeding the protective capability is present, then the eye will have more severe CNV and worse visual acuity. Introduction A dome-shaped macula (DSM) is an inward convexity or anterior deviation of the macular area that was originally defined by Gaucher et al. in 2008, who described DSM as a new type of myopic posterior staphyloma using optical coherence tomography (OCT) [1]. With the development of 3D MRI, other researchers found that DSM develops earlier than staphyloma formation in some patients. Recent studies have indicated that DSM can be found in emmetropic eyes and might be an independent factor for lesions [2,3]. There are still many uncertainties regarding the pathophysiology of the DSM configuration. Several theories have been published, such as scleral infolding, choroidal thickening, and vitreomacular traction, but none has been confirmed. Choroidal neovascularization (CNV), retinoschisis (RS), and serous retinal detachment (SRD) are the main complications that occur in DSM eyes [1,4-9]. Among these complications, CNV was the most frequent, present in more than one-third, and it is a vital factor that threatens the visual acuity of highly myopic patients [8,10]. Thinning of the sclera owing to long-term changes and elongation of the axis may lead to the development of CNV and other macular complications, which could cause visual impairment in highly myopic eyes [11]. Prior reports have found an increase in macular scleral and choroidal thickness in pathologic myopic eyes with DSM, and macular CNV was detected significantly more frequently in eyes without DSM than in eyes with DSM [12,13]. 
It seems that greater bulge height and thicker choroid in highly myopic eyes with DSM may protect against the development of myopic CNV [13], given that DSM may serve as a compensatory mechanism. In addition, although myopic macular retinoschisis was detected more frequently in highly myopic eyes with DSM, foveal retinoschisis was less common in eyes with DSM [9,13]. Thus, it is believed that DSM is a protective factor in maintaining visual acuity in highly myopic eyes.

In this study, we evaluated the visual acuity of patients with highly myopic eyes with or without DSM. The features of CNV and the central retinal thickness (CRT) in patients with highly myopic eyes with CNV were also investigated and analyzed by fundus fluorescein angiography (FFA) and optical coherence tomography (OCT), respectively, which may provide evidence to further understand this phenomenon.

Materials and Methods
In this retrospective observational study, the medical records of patients with highly myopic eyes from the Guangdong Province Traditional Chinese Medical Hospital and The Second People's Hospital of Foshan were reviewed. This study adhered to the principles of the Declaration of Helsinki. High myopia was defined as a refractive error of −6.5 D or worse, or an axial length of >26.5 mm. Eyes with an inferior staphyloma due to congenital tilted disc syndrome or with other vision-threatening pathologies such as corneal opacity, severe cataracts, age-related macular degeneration, and diabetic retinopathy were excluded from the study. DSM was defined as an inward bulge of the macular RPE >50 µm in the horizontal or vertical section of the OCT image [1].

All patients underwent a full ophthalmic examination including a best-corrected visual acuity (BCVA) test, tonometry, optometry, slit lamp examination, funduscopy, axial length (AL) measurements (IOL Master 500, version 7.7, Carl Zeiss AG, Dublin, California, USA), and OCT (3D OCT-2000, Topcon, Tokyo, Japan; Spectralis HRA + OCT, Heidelberg Engineering Inc., Germany). Vertical and horizontal line scans 6 mm in length and centered on the fovea were obtained from OCT. The height of the macular bulge relative to the bottom of the staphyloma was measured by tracing a line tangent to the outer border of the RPE at the bottom of the staphyloma. Then, the distance between the RPE at the foveal center and the newly traced line was measured, representing the height of the bulge. Macular thickness was measured according to the Early Treatment of Diabetic Retinopathy Study (ETDRS) areas, which were defined by three concentric rings (central, inner, and outer circles) centered on the fovea, with diameters of 1, 3, and 6 mm. Other macular changes on OCT, such as intraretinal fluid (IRF), subretinal fluid (SRF), subretinal hyperreflective material (SHRM), myopic foveoschisis (MFS), the integrity of the ellipsoid zone (EZ), and the type of CNV, were also recorded. Subjects with suspected CNV underwent FFA (TRC-50DX, Topcon, Tokyo, Japan; Spectralis HRA + OCT, Heidelberg Engineering Inc., Germany). The size of the CNV area was measured on FFA; areas with early hyperfluorescence and leakage were considered areas of CNV. All image processing and analyses were carried out using public domain software (ImageJ, v1.41d, available at http://rsb.info.nih.gov/ij). A single experienced examiner blinded to the clinical diagnoses of the subjects performed the OCT and FFA examinations.

Statistical Analysis.
All statistical analyses were performed with SPSS for Windows software (version 17.0, SPSS, Inc., Chicago, IL). All values are expressed as mean ± standard deviation. Pearson's chi-square tests were used to compare categorical variables. Student's t-test was used to explore differences in means among continuous variables, and the Mann-Whitney test was performed when the sample data were not normally distributed. Therefore, the age, BCVA, size of CNV, and CRT were compared between the two groups using Student's t-test. The incidence of DSM in both sexes was compared between groups using chi-square tests. The IRF, SRF, SHRM, FS, type of CNV, and integrity of EZ were compared between groups using chi-square tests. A p value < 0.05 was considered statistically significant.

Results
Characteristics of Highly Myopic Eyes (Table 1). The average ages of patients with and without DSM were 64.7 ± 15.43 and 59.62 ± 15.21 years, respectively (t = 2.28, p > 0.05, Figure 1(a)). The other details are shown in Table 1. Of the 472 myopic eyes, 40 eyes harbored CNV (8.5%): 13 eyes with DSM and 27 eyes without DSM. The BCVA of myopic eyes with and without DSM was 0.64 ± 0.66 and 0.44 ± 0.50, respectively, which indicates that the DSM group had overall worse visual acuity than the non-DSM group (t′ = 2.2, p < 0.05, Table 1 and Figure 1).

Considering that CNV is a crucial complication that influences visual acuity, we further divided these 472 eyes into four subgroups: DSM eyes with CNV (13 eyes), non-DSM eyes with CNV (27 eyes), DSM eyes without CNV (39 eyes), and non-DSM eyes without CNV (393 eyes). The average BCVA in the four groups was 1.59 ± 0.69, 0.63 ± 0.64, 0.33 ± 0.17, and 0.44 ± 0.48, respectively (Table 1). The results showed that DSM eyes with CNV had poorer visual acuity than non-DSM eyes with CNV (t = 4.23, p < 0.05, Table 1 and Figure 1(c)). Once the eyes with CNV were excluded, the BCVA was better in eyes with DSM than in eyes without DSM (t′ = −2.96, p < 0.05, Table 1 and Figure 1(d)).

Characteristics of Highly Myopic Eyes with CNV. We further analyzed 40 highly myopic eyes that had been diagnosed with CNV by OCT and FFA. Of all 40 eyes, 13 highly myopic eyes had DSM, and 27 eyes did not have DSM. The average age was not significantly different between the two groups (65.33 ± 17.53 and 64.28 ± 15.73, respectively, p > 0.05, Table 2). The general characteristics of the highly myopic eyes associated with CNV are presented in Table 2. Notably, BCVA was significantly worse in DSM eyes with CNV compared to non-DSM eyes with CNV (1.59 ± 0.69 and 0.63 ± 0.64, respectively, t′ = −2.96, p < 0.05, Table 1 and Figure 1(c)). The CNV area in the DSM group was larger than that in the non-DSM group (2793.91 ± 2181.24 and 1250.71 ± 1210.36 pixels, respectively, p < 0.05, Figure 2(h)). These results indicated that the DSM group had larger CNV lesions than the non-DSM group; however, other potential factors affecting visual acuity, such as the average CRT, IRF, SRF, SHRM, FS, integrity of the EZ, and type of CNV, were not significantly different between the two groups (Table 2). In patients with CNV, the logMAR BCVA is much higher in patients with DSM than in non-DSM patients (Figure 1(c)); in patients with no CNV lesions, the logMAR BCVA is much lower in patients with DSM than in patients without DSM (Figure 1(d)).

Discussion
In our study, DSM was observed in 52 of the 472 highly myopic eyes (11.02%), which is a similar rate to those reported by Ohsugi [1, 8, 11, 14-16]. Previous research has reported DSM rates varying from 9.3% to 20.1%. The differences may be due to variations in the number of samples, differences in the inclusion criteria, and differences in the race of patients included in these studies.
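As a rough illustration of the group comparisons described in the Statistical Analysis subsection above (the study itself used SPSS 17.0), the following Python sketch checks normality, then applies Student's t-test (Welch's variant, reported as t′ in the paper) or the Mann-Whitney test, and a chi-square test for a categorical variable. All input values are hypothetical placeholders, not study data.

import numpy as np
from scipy import stats

# Sketch of the group comparisons described above; bcva_dsm / bcva_non_dsm are
# hypothetical logMAR BCVA samples for the two groups, not study data.
rng = np.random.default_rng(0)
bcva_dsm = rng.normal(0.64, 0.66, 52)
bcva_non_dsm = rng.normal(0.44, 0.50, 420)

# Check normality before choosing the test, as the paper describes.
normal = (stats.shapiro(bcva_dsm)[1] > 0.05 and
          stats.shapiro(bcva_non_dsm)[1] > 0.05)

if normal:
    # Student's t-test (Welch variant when variances differ).
    stat, p = stats.ttest_ind(bcva_dsm, bcva_non_dsm, equal_var=False)
else:
    # Mann-Whitney U test for non-normally distributed data.
    stat, p = stats.mannwhitneyu(bcva_dsm, bcva_non_dsm, alternative="two-sided")
print(f"group comparison: stat={stat:.2f}, p={p:.4f}")

# Chi-square test for a categorical variable, e.g. sex by DSM status (hypothetical counts).
sex_table = np.array([[20, 32], [180, 240]])
chi2, p_chi, dof, _ = stats.chi2_contingency(sex_table)
print(f"chi-square: chi2={chi2:.2f}, p={p_chi:.4f}")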
The association between DSM and visual acuity remains controversial [9,14,17,18]. DSM was initially believed to be a threat to visual acuity when first described by Gaucher et al. because of the association between DSM and several maculopathies, such as CNV, RS, SRD, foveal detachment, and RPE atrophy [1]. The incidence of maculopathy in DSM eyes is much higher than that in eyes without DSM [9,19,20]. This view was further confirmed by subsequent studies that focused on the morphological features of DSM, which found that DSM results from the relative thickening of the macular sclera and may lead to pigment epithelial detachment (PED). Long-term changes and elongation of the axis may thin the sclera, which leads to the development of CNV and causes visual impairment [11]; however, a larger-scale study suggested a different point. In a study involving 1118 highly myopic eyes, macular complications were more common in DSM eyes than in non-DSM eyes, and the most common complication was extrafoveal RS [9]. Foveal RS and CNV, two of the main threats to visual acuity, are much less common in DSM eyes than in non-DSM eyes [9,10,21]. Another study involving 1384 highly myopic eyes revealed that the incidence of foveal RS was also much lower in eyes with DSM than in those without DSM [16]. These results indicate that DSM might be a protective mechanism for visual acuity.

In this study, we found that the BCVA of patients in the non-DSM group was better than that in the DSM group. This result was puzzling in view of the protective nature of DSM for visual function in highly myopic eyes. Given this situation, we compared the factors that may impair visual acuity, including CRT, IRF, SRF, SHRM, FS, integrity of the EZ, type of CNV, and size of the CNV area, between the two groups. The results showed that the CRT, IRF, SRF, SHRM, integrity of the EZ, and type of CNV were not significantly different between the two groups. The highly myopic eyes with DSM showed a larger CNV size than highly myopic eyes without DSM. Further analysis confirmed that the eyes with CNV in the DSM group showed worse BCVA than those in the non-DSM group. We hypothesize that patients in the DSM group had worse BCVA than patients in the non-DSM group because the BCVA of eyes with CNV in the DSM group was lower than that of eyes with CNV in the non-DSM group. After excluding eyes with CNV, the BCVA was better in the DSM group than in the non-DSM group. Additionally, as shown in the FFA results presented in Figure 2, the CNV area was larger in patients with CNV and DSM, but leakage and staining were also apparent. In patients with CNV without DSM, only leakage was observed. This result suggests that the DSM patients had a longer duration of illness before seeking medical care. In this case, we consider that DSM might be a protective mechanism for visual acuity, but its protective capability is limited. Less CNV and better visual acuity were found in DSM eyes within the protective capability. If a more powerful pathogenic factor exceeding the protective capability is present, then more severe CNV and worse visual acuity will occur. The development of CNV in DSM eyes indicates a powerful pathogenic factor, such as an elevated level of vascular endothelial growth factor (VEGF) and refractory macular diseases.
The precise cause of DSM remains unknown; however, several theories ranging from localized choroidal thickening to scleral infolding to vitreomacular traction have been postulated to underlie the mechanism and pathophysiological processes that lead to the formation of DSM in highly myopic eyes [22]. Imamura et al. showed, with enhanced depth imaging OCT, that DSM resulted from the relatively localized thickening of the sclera under the macula in highly myopic eyes [5]. The thick sclera in DSM eyes may act as a macular buckle-like mechanism, thereby alleviating tractional forces over the fovea, preventing foveal RS or RD (retinal detachment), and protecting visual function [9]. Such anatomical features may minimize refractive errors and maintain emmetropization in highly myopic eyes.

Among the vision-threatening complications that can occur in highly myopic eyes, CNV is a key factor that can lead to severe visual impairment [23]. Different views exist about the relationship between the incidence of macular CNV and the presence of DSM. CNV has been reported to be a frequent complication of DSM (12.2%, 20.8%, 25.0%, 41.2%, and 47.8%) in previous studies [10,24,25]; however, Liang et al. reported that the incidence of macular CNV was associated with age but not with the presence of DSM [8]. The mechanism of CNV formation is unclear. Akyol et al. considered that CNV development may be related to choroidal and retinal blood flow changes [26]. Ohsugi et al. reported that CNV formation is caused by thinning of the central sclera owing to elongation of the AL [11]. Other risk factors for the formation of CNV have been reported, such as lacquer cracks, choroidal thinning, patchy atrophy, and the presence of a choroidal filling delay [27,28]. Thinning of the sclera owing to long-term changes and elongation of the axis may promote CNV development and cause visual impairment. Thus, in DSM eyes, a thick sclera may be a potential protective factor against CNV formation. The bulge height of DSM without the complication of CNV is significantly higher than that of DSM with the complication of CNV [11,29].

In the current study, CRT, IRF, SRF, SHRM, FS, integrity of the EZ, and the type of CNV did not significantly differ between myopic eyes with and without DSM. In view of this observation, these factors are not the main cause of the visual acuity differences between eyes with CNV in the DSM group and those in the non-DSM group. The eyes with CNV in the DSM group had a larger CNV area than those in the non-DSM group. In a previous study, eyes with a larger CNV area developed more chorioretinal atrophy (CRA) than those with a smaller CNV area, which may explain why low BCVA was observed in myopic eyes with CNV and DSM [30]. As previously mentioned, the development of CNV in DSM eyes indicates the presence of a powerful pathogenic factor. The level of VEGF is the key factor in the formation of CNV. Therefore, the VEGF level may be higher in DSM eyes with CNV than in those without. This point of view was confirmed in a previous study that found that patients without DSM might be more sensitive to intravitreal ranibizumab therapy in early stages compared to patients with DSM [31].

Our study has several limitations. First, this retrospective study was subject to the potential inherent limitations associated with this study design. Second, we included patients who visited the Guangdong Province Traditional Chinese Medical Hospital and The Second People's Hospital of Foshan.
Thus, the results obtained may not exactly reflect the general myopic population. Third, all patients were Chinese, and we did not include patients of other ethnicities. Fourth, patients with visual symptoms were more likely to be enrolled in the study, which might have resulted in a high incidence of CNV. Finally, no precise method to calculate the CNV size has been established; FFA and OCT are common methods and showed high consistency with previous research [32].

In summary, DSM is a progressive anomaly of the posterior pole of myopic eyes. Our study shows that DSM may play an important role in protecting highly myopic eyes complicated by severe maculopathy and maintaining good visual acuity initially; however, this protection is limited. When neovessels appeared, the size of the CNV area increased, and visual acuity worsened. This work may provide more evidence for predicting the visual prognosis of patients with highly myopic eyes and may provide additional insight into the mechanism of pathologic myopia. These conclusions indicate that highly myopic patients with DSM need more frequent follow-up to detect CNV in time. DSM eyes with CNV might need a higher dose or more frequent intravitreal ranibizumab therapy in early stages than eyes without DSM.

Data Availability
The data used to support the findings of this study are included within the article.

Ethical Approval
This retrospective observational study adhered to the principles of the Declaration of Helsinki.

Conflicts of Interest
The authors declare that they have no conflicts of interest.

Authors' Contributions
Lu Wang and Bin-wu Lin contributed equally to this work; Long Pang designed the study; Lu Wang, Bin-wu Lin, Xiaofang Yin, Wei-lan Huang, and Yi-zhi Wang performed data collection and management; Bin-wu Lin performed data analysis and interpretation; and Lu Wang and Bin-wu Lin wrote and reviewed the manuscript.
Emotional Profiles of Facebook Pages: Audience Response to Political News in Hong Kong

As social media becomes a major channel of news access, emotions have emerged as a significant factor of news distribution. However, the influence of cultural differences on the relationship between emotions and news sharing remains understudied. This paper investigates the impact of cultural disparities on emotional responses to political news in Hong Kong. We introduce the notion of "emotional profile" to capture cultural differences in the level and structure of audiences' emotional responses to political topics on Facebook news pages. The study was conducted at a highly significant political moment in the former British colony when the National Security Law (NSL) was passed. The study found that readers of China-critical news pages on Facebook express the highest emotional intensity while readers of China's media in Hong Kong express the lowest emotional intensity, and readers of China-supporting media fall in between. Readers of China-critical Facebook news pages express the most anger, but their political news sharing is correlated the most with "wow" and "sad" reactions. In contrast, readers of Facebook pages of China's media in Hong Kong are more likely to react with "love", which is also the emotion most associated with their political news sharing. The notion of "emotional profile" helps discover similarities within and differences across political boundaries of the news ecosystem. We interpret the results with the help of recent scholarly understanding of emotional expression on social media within Hong Kong's political context.

Introduction
Emotions are now widely recognized as one of the audience responses to news consumption. As social media becomes a major channel of news, the audience's emotional responses have recently garnered attention due to their correlation with news sharing and the exploitation of emotions by politically extremist groups for disseminating disinformation (Ganesh 2020). However, studies of audience response metrics have found systematic cross-national and intra-national differences in the "culture of engagement" (Ferrer-Conill et al.
2021), implying that the level and types of emotion aroused by identical news topics or presentations may differ across cultures.We follow this lead and focus on cultural differences in the audience's emotional reactions to news within the news ecosystem of Hong Kong, a former British colony and now a Special Administrative Region of China.We compare the level and structure of emotional reaction of readers to political news published by outlets of different political positions.We also examine the relationship between various emotional reactions and the sharing of political news.To conduct the comparisons, we developed an original categorization scheme for news media in Hong Kong and employed computational analysis of Facebook data.Interpreting the results, we highlight the expressive, social, and instrumental aspects of emotional reactions in the digitally networked social space.The study contributes conceptually and methodologically by proposing the notion of "emotional profile" as one aspect of the "engagement profile" (Corner 2017), operationalizing the expressed acts of the emotional culture of engagement at the level of individual Facebook pages.It offers insight into the mechanism of networked news production on Facebook in an East Asian context by investigating the association of emotional reaction and political news propagation, at a politically significant moment of the place.It contributes to understanding the changing news ecosystem under Chinese rule by mapping a wide range of news outlets.The study adds to a growing body of literature focused on audience engagement with news in non-Western societies.In the rest of the paper, we review the literature on audience engagement with news on social media and on public emotions around politics on social media before putting forth our theoretical framework.We then describe the political news landscape in Hong Kong and introduce our categorization scheme for news providers.Finally, we explain the data, methods, and research hypotheses, before reporting and discussing our results. Audience Engagement on Social Media "Engagement" has recently become a buzzword in the news industry.It is an amorphous term that encompasses all dimensions of audience responses, incorporating "elements ranging from loyalty and attentiveness to behavioral response" (Napoli 2012, p. 86).Audience engagement with news can bring normatively positive consequences such as fostering political participation (Gil de Zúñiga et al. 2014) or negative consequences such as cyberbullying and the spreading of manipulative propaganda (Quandt 2018).However, economic incentives and the desire for social relevance drive news organizations to use engagement metrics as key indicators of journalistic performance (Moyo et al. 2019). Audience's Like, Emotional Reaction, and Share Metrics of the "small acts of engagement" on social media (Picone et al. 2019), commonly consisting of Like, Share, and Comment, do not always correlate (Kim and Yang 2017).Like is the most common audience response across cultures followed by comments and shares (Ferrer-Conill et al. 2021;Larsson 2018), but newsrooms consider likes the least important as their meaning remains ambiguous (Kim and Yang 2017;Larsson 2018).Gerlitz andHelmond (2013, p. 
1358) argue that the like button of Facebook's social plugin on the web is "a one-click shortcut to express a variety of affective responses such as excitement, agreement, compassion, understanding, but also ironic and parodist liking".Shares, however, are considered the most important as they increase story views (Sehl et al. 2018). In February 2016, Facebook added the Reaction functionality.Hovering the cursor over the like button under a post, the reader will see several additional reaction emojis labeled "love", "haha", "wow", "sad", "angry", and, since April 2020, "care". Cultural Differences in Emotional Engagement with News on Facebook While the news industry's interest in audience engagement has prompted plentiful academic studies across different countries, the context dependency of engagement has caught relatively little attention.A prominent exception is Ferrer-Conill et al.'s (2021) study, which identified different levels of audience engagement as reflected in the like, share, and comment metrics across Nordic countries and between their state-owned and privately owned news media, which they called different "cultures of engagement".Other studies have found different levels of engagement across political lines within the same countries.Readers of alternative news, especially on the political right, reacted to, shared, and commented more than readers of mainstream news in the US and Norway (Hiaeshutter-Rice and Weeks 2021; Larsson 2019). Different types of audience emotions have also been expressed on Facebook news in different countries and on pages of different political positions."Angry" is the most frequent emotional reaction to Facebook news in France, whereas "love" is the highest reaction in the US and "haha" the most common in Germany (Tian et al. 2017).In Austria, many more angry reactions were found on far-right pages on Facebook than on pages of the social democrats (Eberl et al. 2020).In the US, left-and right-leaning hyper-partisan pages received a similar level of angry reactions, but left-leaning pages received much more love and laugh reactions (Sturm Wilkerson et al. 2021).However, the most likely audience reaction to hyper-partisan news in the US, whether left or right, is anger (Sturm Wilkerson et al. 2021).Emotional reaction is widely recognized as correlated to news sharing, but which specific emotions relate to news sharing varies across countries and political divides (Larsson 2018;Tian et al. 2017). 
Public Emotions around Politics in Networked Social Media For a long time, emotions have been disdained in journalism studies as being associated with sensational tabloid journalism, but recently their existence and significance in all phases of the news process have gained recognition in an "emotional turn in journalism studies" (Wahl-Jorgensen 2020), informed by developments in the sociology and psychology of emotions.One of these developments is the differentiation of emotions from related human states, aptly summed up by Eric Shouse (2005): "A feeling is a sensation that has been checked against previous experiences and labelled….An emotion is the projection/display of a feeling, …[which] can be either genuine or feigned….An affect is a non-conscious experience of intensity…".Another development is the identification of socially constructed properties in feelings and emotions.Individuals label their own feelings according to the "feeling rules" (Hochschild 1979) or an "emotion norm" (Thoits 1989) and decide what emotional reaction is appropriate according to the "display rules" (Hochschild 1979) or an "expression norm" (Thoits 1989).With this knowledge, "which emotions do gain purchase in the public sphere, why, and with what consequences" emerge as important research questions (Wahl-Jorgensen 2020, p. 177). Relevant to the "which" question, scholars have found that the characteristics of social media posts including the topic and language such as emotive or populist language correlate with the level and type of the audience's emotional reactions (Eberl et al. 2020;Jost et al. 2020;Sturm Wilkerson et al. 2021).However, the audience's emotional responses, in turn, impact the news produced in a feedback loop between news consumers and producers (Beckett 2015).This makes the cause and effect of audience reactions and the news texts less clear.On the other hand, the design of the social media platform helps to answer the "why" question.The social media metrics on social media platforms have been found to act as "popularity cues" (Haim et al. 2018) in providing "social navigation" (Lünich et al. 2012) to the audience's emotional reactions.The social process is expected to encourage a shared emotional interpretation of the news in the social network. The existence of a shared interpretation strategy is central to the notion of "interpretive community" used by Janice Radway ([1984Radway ([ ] 1991)).Separately, discursive interactions in digitally connected spaces among participants with political discontent underlie the notion of "affective public", conceived by Zizi Papacharissi (2014).Based on a study of Twitter users' collaborative news making via hashtags during political movements, and defining affect broadly to subsume feelings and emotions, affective publics are defined as "publics that actualize by feeling their way into politics through media" (Papacharissi 2014, p. 115).Research on community and public formation is relevant to the "consequence" question.However, although abundant research has been conducted on community formation in online spaces, they primarily examine the use of language exchanges, and far less is known about the role of emojis in community formation.Emojis have been found to have a strong impact on reader perceptions of the writer's commitment and personal mood (Ganster et al. 2012).Another study, conducted in online game environments, concluded that phatic emojis can help to create and/or reinforce community among players (Graham 2019, pp. 
388-9). Ferrer-Conill et al.'s (2021, p. 96) notion of "culture of engagement" refers to "differences in how readers engage with news posts depending on the country of origin and whether they are state- or privately owned outlets". We expand this conceptualization to encompass differences between media categories by political position, focusing on readers' emotional reactions. This extension and focus are justified as the studies reviewed above have found that readers of news of different political positions engage with news at different levels and express different types of emotions. Spotlighting the emotional aspect of cultures of engagement is consistent with research findings that emotional cultures operate at societal and group levels (Gordon 1989). Since human emotional response aligns with its underlying cognitive appraisal (Scherer 2005), we expect to find different emotional cultures of engagement in different media categories. Within the same media category, we expect minor differences between news providers due to their relatively unique engagement profile. Corner (2017, p. 2) suggested the term "engagement profile" to refer to "a variety of levels of engagement/involvement…generated across audiences who bother to attend at all, ranging from intensive commitment through to a cool willingness to be temporarily distracted right through finally to vigorous dislike" around a media product. Amid the wide range of audience responses, we limit our attention to the mix of emotional reactions to various political topics and operationalize it as the "emotional profile" of the Facebook page. In homage to Raymond Williams's notion of "structure of feelings" (Filmer 2003), we define the emotional profile of a Facebook public page as a composite measure that operationalizes the structure of feelings as emotional reactions expressed on a range of topics in the posts of the page over a period of time.

Informed by the literature review above, we consider that emotional reactions on Facebook are not a mere reflection of individual readers' emotions aroused by reading the posts. The count of reactions for each type of emotion to a post has at least two meanings: one, it is the aggregate of various readers' emotional expressions to the post in consideration of appropriateness in the social network, and two, it is a signal to future readers of the post which emotional reactions are appropriate to express. The total count of emotional reactions to a post on Facebook, on the other hand, signals both the popularity of the post and the number of readers in the interpretive network of the post who make their emotional expression visible.
We see users' emotional reactions on a Facebook public page as social acts involving the public display of selected emotions associated with one's names in an interpretive network.In addition, the reactions could also express the users' feelings, or they could be enacted to achieve strategic goals.The interpretive network exists at the level of the Facebook public page and is activated from time to time around individual social media posts of the page, involving readers who interact with the particular post and with each other about the post.The interpretive network of a Facebook page also connects with those of other pages holding a similar political position through common followers of the pages and news sharing by readers of the pages.Unlike Radway's ([1984] 1991) concept of interpretive community, in online interpretive networks, we foresee the likely existence of a minority of users holding views that contest the majority interpretation, as has been found in networked framing studies of political issues (Meraz and Papacharissi 2013;Nip et al. 2020).In Papacharissi's (2014) conceptualization, affective publics are constructed through semantic discourses.However, we believe that users' emotional reactions on a Facebook page complement and support their commenting and sharing of news in building an affective public around the page and on networked pages. Political Landscape of News in Hong Kong Hong Kong's press freedom became an issue of concern following the 1984 agreement between Britain and China on sovereignty transfer.In 1997, the year of the political changeover, the Chinese-language news media in Hong Kong consisted of the mainstream (commercial) media, the de facto public broadcaster RTHK, and the local Party press. 1 Departing from the "centrist" model of journalism adopted in the 1970s (Chan and Lee 1989), the mainstream news media have become progressively influenced by-and supportive of-the Chinese authorities since the 1980s (Chan and Lee 1991;Frisch et al. 2018).In the past decade, China's influence in the Hong Kong media has accelerated as local Party newspapers and China's national news media both started online products to target Hong Kong, while appointees of China's political structure, office bearers of China's United Front organizations, 2 and individuals involved in China-supporting politics in Hong Kong have started other pro-China digital news media.A notable exception in the mainstream news media was the Apple Daily, launched in 1995, which was highly critical of the Chinese regime.From the early 2000s, online dissenting media have also appeared. The left-right division in Western societies cannot be directly mapped onto the politics in Hong Kong, where the degree of support for the Chinese Party-state is a better way to gauge the political position of the news providers.Under Chinese rule, opinion polarization in Hong Kong has intensified since 2003 (Chan and Fu 2017), when the first wave of large-scale street protests occurred.Supporters and opposers display different media consumption patterns, with protesters relying on dissenting news media and Facebook and anti-protesters consuming traditional media (Centre for Communication and Public Opinion Survey (CCPOS) ( 2020)).Given this, we expect news pages on each side of the political divide regarding China to have different emotional cultures of engagement. Media Sampling The study focused on Chinese-language news providers headquartered in or targeting Hong Kong. 
We started with lists of newspapers and broadcasters published by the Hong Kong government.Noting the absence of some widely known pro-China outlets on the lists, we included the Hong Kong service of China's national official media and searched for pro-China news outlets in local news reports extensively.For each outlet, we chose the Facebook page that published about society and politics. 3This resulted in 52 Facebook pages after excluding those that did not publish in the data period.We focused on the Facebook pages of the news providers because, at the time of data collection, online channels, and particularly Facebook, were the most used sources of news (85% and 58%, respectively) in Hong Kong (Chan et al. 2020). Media Categorization Academic studies and journalistic reports about the news media in Hong Kong commonly describe certain outlets as "pro-China" without identifying the correlates of such an ideological positioning.In view of the rapid growth of "pro-China" news outlets in recent years, we devised a categorization scheme to differentiate among them.The categorization considers two dimensions: (1) the location of the headquarters of the news outlet, and (2) the political-economic relationship between the news provider and the Chinese Party-state.Considering the first dimension, despite distinctions between Party/official and market/non-official media in mainland China (Stockmann 2013), all China-headquartered media in Hong Kong face equal restrictions when discussing national security issues under the Administrative Measures for Internet Information Services (State Council of the People's Republic of China 2020), distinguishing them from Hong Kong-based media outlets.For the second dimension, we analyzed the ownership of the news provider and the political awards/appointment and business connections of the owners/responsible personnel/majority shareholders in China.To do this, we corroborated information from Hong Kong's company registry, annual company reports, news reports, and academic research.This resulted in three categories: (1) China media, consisting of news providers headquartered in mainland China and those owned by them in Hong Kong; (2) China-supporting media, consisting of news providers headquartered in Hong Kong whose leadership is connected to China politically and/or economically; and (3) China-critical media, whose leadership does not bear identifiable political or economic connections with China (Table 1). 
Time Sampling, Data Collection, and Pre-Processing of Posts Triggered by the government-proposed Extradition Law Amendment Bill (ELAB), in 2019-2020, Hong Kong experienced the largest-ever anti-government protests in its history, which increased political polarization, with supporters of the establishment labeled "blue ribbon" and opposers "yellow ribbon".The events ended with China's imposition of the National Security Law (NSL) on Hong Kong in 2020, which has been widely criticized for its broad scope and ambiguity that could enable the repression of political opposition and free speech.Our study captures this significant historical moment.Through the API of CrowdTangle, a public insights tool owned and operated by Facebook, we collected all posts from the 52 pages between and including 21 May 2020 (the date when the Chinese authorities announced that an NSL would be passed for Hong Kong) and 31 July 2020 (the date when the Hong Kong Office on National Security newly established by the Chinese authorities first met with the Hong Kong government's Committee on National Security).We recorded the text (excluding visuals and videos) and the number of reactions, shares, comments, and likes for all posts. The raw text data contained advertisements as well as other forms of noise such as signatures or links to the page's other social media platforms.We used a mix of heuristic rules and regular expressions to clean the data.We then tokenized the text of the cleaned posts using the jieba library (Sun 2020), augmented with a Cantonese-specific dictionary (Shen et al. 2021), and filtered stop words.The data cleaning and pre-processing yielded a dataset of 89,896 posts containing, on average, 163 tokens (independent semantic units of one or more Chinese characters). Topic Modeling of Posts To identify the main topics present in the text of the news posts, we first applied a machine learning technique known as topic modeling to the dataset, using the non-negative matrix factorization (NMF) algorithm.The technique also enabled us to estimate the proportion of discourse devoted to each of the topics in each post.We opted for NMF over the more popular LDA (Latent Dirichlet Allocation) method because it has been shown to significantly outperform LDA with short texts (Si et al. 2022).We relied on the TC-W2C coherence measure, adapted to NMF (Greene and Cross 2017) to select the number of topics, and manually verified its appropriateness by iterating over neighboring topic numbers. The method produced seven different topics, which we labeled based on the most indicative keywords and the content of posts as: "NSL", "COVID-19", "International & China", "Police", "Hong Kong", "Confirmed COVID cases", and "Legislative affairs" (Table 2).We eliminated posts whose content did not contain a significant proportion of any of the seven topics by applying a threshold.This left us with 51,280 relevant posts.Out of the seven topics, we narrowed down the dataset to the political topics (NSL, police, and legislative affairs), leaving us with 24,652 posts, which we call political posts below. Emotional Analysis of Audience Considering the unclear status of like, we focused on the six newer reactions as indicators of the reader's emotions.These reactions have often been assumed to be good indicators of the audience's actual emotional attitudes (Tian et al. 
2017).However, as discussed above, we do not analyze the metrics of emotional reactions as a measure of the readers' emotional states, but rather as emotional expressions within specific social contexts.Focusing on the audience's emotional reactions allows us to overcome several limitations of traditional textbased approaches, which often focus on the content or sentiment of the post rather than comments and fail to provide an understanding of the effect of the posts on the audience.When comments are analyzed, one must contend with the fact that users who comment are far fewer than those who merely react; using them as a primary data source thus allows us to capture reactions from a wider set of users.Furthermore, text-based sentiment analysis most commonly classifies texts on a crude negative-(neutral)-positive scale.This assumes a single way of reacting to a post and fails to consider the diversity of audience reactions.Sentiment analysis libraries, which may display good performance on their training dataset, may not perform well when applied to domains not covered by the original dataset or to ambiguous text.Indeed, as part of our preliminary data exploration, we used human coders to manually annotate the sentiment of a subsample of the posts on a positiveneutral-negative scale and found very low intercoder agreement, which would make any algorithm trained on data with such interpretational variability unreliable.Although emotion recognition algorithms offer a finer-grained look by classifying text into several different emotions, state-of-the-art algorithms offer far less reliable results than binary detection of positive/negative sentiment (Alswaidan and Menai 2020), especially for languages such as Cantonese.In contrast, using reactions-data provided by actual users-allows us to do away with the unreliability of sentiment and emotion detection algorithms. Profiling of Facebook Pages While abundant research has been conducted to correlate emotional reactions with national and subnational divisions, we seek to explore the utility of emotional reactions as an indicator of the political stance of the audience around a Facebook page.Based on the expressed emotional reactions, we created an indicator that we call "emotional profile".The emotional profile of a page considers the mix of content from the various topics identified in each of the posts as well as the type of audience emotional reaction to the post.We define it as the relative frequency distribution of the reactions of a page's readers to each of the topics present in its posts. 
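As a rough sketch of the pre-processing and topic-modeling pipeline described in the preceding sections, the following Python fragment shows tokenization with jieba and topic extraction with scikit-learn's NMF. The file names, stop-word list, and TF-IDF features are illustrative assumptions, and the coherence-based selection of the number of topics used in the study is not reproduced here; the document-topic weights it yields correspond to matrix A in the profile computation that follows.

import jieba
import numpy as np
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.decomposition import NMF

# Assumed inputs: one cleaned post per line, plus a Cantonese-specific user dictionary.
posts = open("cleaned_posts.txt", encoding="utf-8").read().splitlines()
jieba.load_userdict("cantonese_dict.txt")
stopwords = {"嘅", "咗", "的", "了"}  # placeholder stop-word list

def tokenize(text):
    # Segment Chinese text and drop stop words.
    return [t for t in jieba.lcut(text) if t.strip() and t not in stopwords]

vectorizer = TfidfVectorizer(tokenizer=tokenize, lowercase=False)
X = vectorizer.fit_transform(posts)                  # posts x vocabulary matrix

n_topics = 7                                         # the study settled on seven topics
nmf = NMF(n_components=n_topics, init="nndsvd", random_state=0)
A = nmf.fit_transform(X)                             # posts x topics weights
A = A / (A.sum(axis=1, keepdims=True) + 1e-12)       # normalize weights per post

# Top keywords per topic, used to label topics such as "NSL" or "Police".
terms = vectorizer.get_feature_names_out()
for k, row in enumerate(nmf.components_):
    top = [terms[i] for i in row.argsort()[::-1][:8]]
    print(f"topic {k}: {' '.join(top)}")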
For each page, topic modeling gives us a matrix A of dimension n × m, where n is the number of posts and m the number of topics, so that the value a i,j for i ∈ [1, n] and j ∈ [1, m] will be the normalized weight of topic j present in post i.We also have a matrix B of dimension n × 6, containing the number of each of the six reactions generated by every one of the n posts in our dataset.We first compute the product of A T and B to distribute the number of reactions to a post across the topics covered in it and aggregate the results across all posts.We thus obtain a 6 × m matrix C and compute row-wise percentages so that the value c i,j for i ∈ [1, 6] and j ∈ [1, m] gives us a representation of the proportion of reaction i to topic j as a percentage of all reactions to topic j.We flatten the matrix into a 6•m dimensional vector which constitutes the emotional profile of the page.We rely on proportions so that the profiles are relatively independent of the overall level of intensity of reactions and instead express the structure of emotional engagement. Research Hypotheses Given the above-discussed context, we expect that readers of China-critical media feel more intensely about political topics than readers of China-supporting media or China media. H1a. Facebook pages of China-critical HK media receive a higher level of emotional reactions than China-supporting HK media over political news. H1b. Facebook pages of China-supporting HK media receive a higher level of emotional reactions than China media in HK over political news. With the NSL implemented, we expect the China media and China-supporting media to tone down the antagonistic atmosphere in Hong Kong and encourage harmony, whereas readers of China-critical media would express high anger. H2a. Facebook pages of China media in HK evoke a different structure of emotional reactions from China-supporting HK media over political news. H2d. Facebook pages of China-supporting HK media evoke proportionally more anger among their audience than China media in HK over political news. Given the long history of protests in post-handover Hong Kong, we reckon that anger is a highly relevant emotion.Since the protests mobilized a far larger portion of the population than pro-government rallies, we expect the angry reaction moved readers of Chinacritical media to higher political participation in the form of sharing and commenting on political news. H3a. Angry reaction on Facebook pages of China-critical HK media is more strongly related to the audience's sharing of political news than on Facebook pages of China-supporting HK media. H3b. Angry reaction on Facebook pages of China-supporting Hong Kong media is more strongly related to the audience's sharing of political news than on Facebook pages of China media in Hong Kong. H4a. Angry reaction on Facebook pages of China-critical HK media is more strongly related to the audience's commenting on political news than on Facebook pages of China-supporting HK media. H4b. Angry reaction on Facebook pages of China-supporting Hong Kong media is more strongly related to the audience's commenting on political news than on Facebook pages of China media in Hong Kong. Analyses To provide a background for the comparisons, we analyzed: 1. The volume of all the posts published by each of the news pages and media categories; 2. The news agenda, measured by the proportion of the news topics above the set threshold in all the posts, of each of the news pages and media categories. To test the hypotheses, we compared: 1. 
The level and proportion of each of the emotional reactions made by readers to the political posts of each of the media categories and news pages; 2. The correlations between the level of each of the emotional reactions and the number of news shares as well as the number of comments among the political posts of the media categories.

Volume of News Publishing
News providers of the three media categories differed substantially in the number of posts they published in the 72 days of the data period: China-supporting HK media published the most, with an average of 1954.6 posts per page (27.1 posts per page per day), followed by China-critical HK media (1786.0 posts per page, or 24.8 posts per page per day), and then China media in HK (1144.5 posts per page, or 15.9 posts per page per day). However, a statistical difference in the average number of posts per page is found only between China-supporting HK media and China media in HK at p = 0.1, due to the small number of pages in the categories.

News Agenda
The news agenda of the three media categories differed substantially. Despite the high significance of the NSL, the China media covered the three political topics (NSL, police, and legislative affairs) the least (13.1 percent), followed by China-supporting media (33.6 percent), both much lower than China-critical media (58.7 percent) (Figure 1). These differences in the news agenda are consistent with the expectations behind our hypotheses.

Emotional Intensity
China-critical media received a much higher average number of emotional reactions per political post (n = 1344.0) than China-supporting media (n = 308.6) or China media (n = 38.0). This is partly because the two pages with the largest follower numbers, hk.nextmedia and standnewshk, were China-critical (Table 1). This is also because their readers were more active in reacting emotionally, as shown in the average number of emotional reactions per political post per follower. The differences between the three media categories apply to each of the six emotions in political posts (Figure 2). Pairwise comparisons using Tukey's HSD test indicate that all differences except two are significant at the 0.001 level. H1a and H1b are confirmed.
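The pairwise comparisons reported above could be reproduced along the following lines; this is a minimal sketch assuming a per-post table with hypothetical column names (`reactions_per_follower`, `category`), not the authors' actual analysis script.

import pandas as pd
from statsmodels.stats.multicomp import pairwise_tukeyhsd

# Assumed input: one row per political post, with its media category and
# emotional reactions per follower.
df = pd.read_csv("political_posts.csv")

result = pairwise_tukeyhsd(
    endog=df["reactions_per_follower"],   # emotional intensity measure per post
    groups=df["category"],                # "China", "China-supporting", "China-critical"
    alpha=0.001,                          # the study reports significance at the 0.001 level
)
print(result.summary())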
Proportion of Different Emotional Reactions
Anger is the most prominent emotion expressed on political posts in all three media categories, but its proportion is far lower in China media (33.0%) than in China-supporting media (57.2%) or China-critical media (62.9%) (Figure 3). China media evoke the highest proportion of love reactions (20.8%) compared to the other two media categories (Figure 3). Pairwise comparisons using Tukey's HSD test indicate that all differences except two are significant at the 0.001 level.

The level of angry reactions to the NSL topic is significantly different (p < 0.05) between the three media categories, with China media being the lowest, China-supporting media higher, and China-critical media the highest. However, on the legislative affairs topic, readers of China-supporting media were angrier than readers of China-critical media (Table 3). In fact, the proportion of all six emotions differs between all pairs of the three media categories in each of the political topics, except for the care reaction in the NSL topic, the care, haha, and sad reactions in the police topic, and the wow reaction in the legislative affairs topic (Table 3).
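A minimal numpy sketch of the emotional-profile construction defined in the Profiling of Facebook Pages section is given below. The per-topic normalization reflects our reading that each reaction count is expressed as a share of all reactions to that topic, and the input arrays are hypothetical placeholders standing in for one page's topic weights and reaction counts.

import numpy as np

# Sketch of one page's emotional profile: distribute reaction counts over topics,
# normalize within each topic, and flatten. Inputs are hypothetical placeholders.
n_posts, n_topics, n_reactions = 500, 7, 6
rng = np.random.default_rng(1)
A = rng.random((n_posts, n_topics))                  # per-post topic weights (e.g. from NMF)
A = A / A.sum(axis=1, keepdims=True)                 # each post's weights sum to 1
B = rng.integers(0, 200, (n_posts, n_reactions))     # per-post counts of the six reactions

C = A.T @ B                                          # topics x reactions attributed to topics
C = C / (C.sum(axis=1, keepdims=True) + 1e-12)       # share of each reaction within a topic
profile = C.flatten()                                # 6*m-dimensional emotional profile
print(profile.shape)                                 # (42,) for 7 topics x 6 reactions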
Emotional Profiles
We trained a machine learning classifier to predict the category each page would be classified into based on its emotional profile, i.e., the relative proportions of different reactions to each different topic. We used the Random Forest algorithm with 10-fold cross-validation, with a mean F-score of 80.5% (standard deviation 0.09). We isolated the 10 most salient features used by the trained model to make a prediction. We then plotted, for each category, the average proportion of each feature among the pages included in the category (Figure 4). This allows us to focus on the aspects of the pages' emotional profiles that were most likely to be representative of their category. We can then, in turn, visualize how each category differs from the others. The results validate our postulation that pages taking a different political position have significantly different cultures of emotional engagement. China-critical media pages, for instance, are characterized by angry reactions to NSL- and police-related news. While those features are also characteristic of China-supporting media pages, the scale is comparatively lower. In contrast, angry reactions to the NSL are virtually irrelevant for China media pages, where instead the love reaction to the NSL is highly represented. While also important, albeit to a lesser extent, among China-supporting media pages, love reactions to NSL-related news are virtually absent from China-critical media pages.

The above analyses of the proportion of emotional reactions and the emotional profiles together provide evidence for a different structure of emotional reactions over political topics between China media in HK and China-supporting HK media, and between China-supporting and China-critical media. H2a and H2b are supported. The anger expressed over political topics by readers of China-critical media is higher than in China-supporting media, which, in turn, is higher than in China media. Hypotheses 2c and 2d are supported.
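The classification of pages by emotional profile described above could look roughly like the following. The Random Forest settings, the profile matrix, and the label file are assumptions for illustration; only the 10-fold cross-validation and the extraction of the most salient features are taken from the description above, and plain (unstratified) folds are used here to keep the sketch runnable with few pages per category.

import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import KFold, cross_val_score

# Assumed inputs: one emotional profile per page (e.g. 7 topics x 6 reactions = 42 features)
# and the manually assigned media category of each page.
profiles = np.load("page_profiles.npy")
labels = np.load("page_categories.npy")

clf = RandomForestClassifier(n_estimators=500, random_state=0)
cv = KFold(n_splits=10, shuffle=True, random_state=0)
scores = cross_val_score(clf, profiles, labels, cv=cv, scoring="f1_macro")
print(f"mean F-score {scores.mean():.3f} (sd {scores.std():.3f})")

# Fit on all pages to rank the topic-by-reaction features by importance,
# mirroring the 10 most salient features plotted in Figure 4.
clf.fit(profiles, labels)
top10 = np.argsort(clf.feature_importances_)[::-1][:10]
print("most salient feature indices:", top10)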
Angry Reaction and News Share
The political posts of China media and China-supporting media, in that order, drew fewer shares and comments than China-critical media (Table 4). Consistent with studies elsewhere, the total reaction count of political posts correlates significantly with the sharing of political posts, but the relationship is the strongest among China-critical media (Table 5). In both China-critical and China-supporting media, the angry reaction to political posts is significantly associated with the sharing of political posts, but the correlation is stronger among China-critical media; H3a is supported. The angry reaction to political posts in China media is not significantly related to the sharing of political posts (Table 5); H3b is supported.

The emotions associated most strongly with political post sharing differ among the three categories: on China-critical media, it is wow (r = 0.9473, p < 0.0001) and sad (r = 0.9166, p < 0.0001); on China-supporting media, it is care (r = 0.7585, p < 0.0001) and love (r = 0.6826, p < 0.0001), which also apply to China media except in a different order (love r = 0.8489, p < 0.001; care r = 0.8013, p < 0.01) (Table 5).

Where commenting on political news is concerned, the association with anger is again the strongest among China-critical media, more so than China-supporting media, whereas among China media the two are not related. H4a and H4b are supported. However, among China-critical media, it is love that is most strongly correlated with political news commenting (r = 0.9030, p < 0.0001), but among China-supporting media, it is care (r = 0.8694, p < 0.0001), and among China media it is haha (r = 0.84189, p < 0.05) (Table 5).

Discussion and Conclusions
Relying on 52 public pages of Hong Kong news media on Facebook, which we group into three categories, we find that China media, and to a lesser extent, China-supporting media, reported far less political news than China-critical media on Facebook, although China-supporting media were as active as China-critical media in publishing news posts. The limited coverage of politics in news published by China and China-supporting media is consistent with the Chinese leadership's vision for Hong Kong as an economically driven city rather than a politically charged one.
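The reaction-sharing correlations reported above (Table 5) are per-category Pearson correlations between reaction counts and share counts of political posts; a minimal pandas/scipy sketch, with hypothetical column names and per-post units assumed, is shown below.

import pandas as pd
from scipy.stats import pearsonr

# Assumed input: one row per political post with its media category,
# per-reaction counts, and share count.
df = pd.read_csv("political_posts.csv")
reactions = ["love", "haha", "wow", "sad", "angry", "care"]

for category, group in df.groupby("category"):
    for reaction in reactions:
        r, p = pearsonr(group[reaction], group["shares"])
        print(f"{category:>18} {reaction:>6}: r={r:.4f}, p={p:.4g}")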
Comparing political news published by the three media categories, China media fetched the fewest emotional reactions, shares, and comments, while China-critical media fetched the most. As said above, sentiment analysis methods would not provide a reliable measure of the emotions embedded in the political posts, therefore making it impossible to check how they might correlate with the audience's emotional reactions. In such absence, the first author conducted a qualitative reading of a large subsample of the most reacted-to posts published by the three media categories but did not detect a noticeable difference in the amount of emotional content in the posts between the media categories. Since emotions relate to news sharing, the very different levels of emotional reactions towards political news published by the three media categories would mean that political news published by China-critical news providers is shared much more often than that of China-supporting media or China media. The differences are further underpinned by the larger numbers of followers of the top pages in the China-critical media category. However, in the news ecology of Hong Kong as a whole, the greater number of China-supporting news providers may compensate for the shortfall.

The high level of anger found among pro-democracy China-critical media provides a contrast, under different political contexts, to studies in the US and Europe, where partisan right-wing media are the angriest. However, on pages of different political positions in Hong Kong, anger, the most common emotional reaction across the three media categories, is directed towards different targets, in alignment with the news provider's political position. For example, the post that drew the largest number of angry reactions in the China-critical media category is about a policeman stamping on a man's calf while arresting him. In contrast, a post about the Hong Kong Lawyers' Association condemning the physical attack of a lawyer who argued with rioters blocking the road solicited the most angry reactions on the China-supporting page speakouthk. These examples support the interpretation that the angry emotional reactions on the pages serve to express the users' feelings about the events reported in the news. A non-representative survey conducted in Hong Kong after the passing of the NSL found that anger was the most experienced emotion (80.5%) (Cheng et al. 2022). At the same time, making an angry emotional reaction can be a social act: a public display of defiance in a relatively safe space among people with a shared interpretation of politics, which cannot be done elsewhere without substantial risks in Hong Kong's changed political environment. Further, angry emotional reactions can be a strategic act of political mobilization, as emotions have long been recognized as a motivation for people to join social movements (Jasper 1998). In this light, the differential levels of user engagement are indicators of the level of online activism associated with different political positions. Such differentials are consistent with the differential strengths in the offline mobilization of the China-critical versus the China/China-supporting camp, as evidenced in the scale and duration of protests and riots against the Hong Kong-China government in 2019-2020.
We foresee that the frequent acts of emotional reaction reinforce the emotional bond between users on the Facebook pages, fortify their shared emotional interpretation, and strengthen their shared cognitive interpretation of political news. The shared emotional and cognitive orientations in the networks lay the cornerstone of a discursively constructed affective counter-public that contests hegemonic discourses disseminated by the China media and China-supporting media.

This study makes the first attempt to compare the audience's emotional responses to political news in Hong Kong in the post-NSL period. We have found systematic differences in the level and structure of emotions expressed by audiences over political news among three media categories differentiated by their political-economic relationship with the Chinese Party-state. We also found that different types of emotion correlated most strongly with audience sharing and commenting of political posts. The results provide evidence that political position acts as a dimension of differentiation of emotional cultures of engagement on Facebook in the same news ecosystem. The correspondence between the results of classifying the emotional profiles of individual Facebook pages and the manual categorization of news outlets demonstrates the usefulness of the notion of "emotional profile" that we propose, as it enables comprehension and comparison of news products at the reception end, beyond description based on audience demographics or analysis at the production end based on news content.

The news ecology on Facebook is changing rapidly. Since this study was conducted, six news providers included in the China-critical category of our sample, hk.nextmedia, nextmagazinefansclub, standnewshk, hkcnews, maddogdailyhk, and post852, have ceased operation under pressure of the NSL. In post-NSL Hong Kong, even sharing a public Facebook post can become liable to criminal prosecution (Cheng 2022); large numbers of public Facebook pages have closed (Creery 2020). These changes will undermine the political consequences of user engagement on Facebook. On the other hand, Facebook has demoted political content in the platform's algorithm since early 2021 (Horwitz et al. 2023). While such a move might help to mitigate polarization in the North American context, it is likely to further limit public expression and dissemination of China-critical political content in places such as Hong Kong.

Figure 1. Proportion of seven topics by media category.
Figure 2. Average number of emotional reactions per political post divided by follower count.
Figure 3. Average proportion of emotional reactions to political posts by media category.
Figure 4. Average (standard deviation indicated by the black line) proportion of each of the 10 most salient features among the categories.
Table 1. Categorization of sampled news providers in HK.
* The follower numbers were dated 1 July 2020 in the Asia/Shanghai (CST) time zone, manually collected from Intelligence on CrowdTangle on 14 July 2021, except for four pages (dotdotnews, hk.nextmedia, nextmagazinefansclub, and post852), which had closed at the time and for which we relied on estimates based on data from third-party sources.
Table 2. The seven topics generated from topic modeling and their 10 most representative words.
Table 3. Average proportion of emotional reactions to political topics in political posts by media category. * indicates a significant pairwise difference with China media (p < 0.05), † a significant difference with China-supporting media, and § a significant difference with China-critical media.
Table 4. Average share and comment count by media category.
Table 5. Pearson's correlation coefficients of reaction count and share/comment count in political posts.
2023-10-05T15:32:51.329Z
2023-09-30T00:00:00.000
{ "year": 2023, "sha1": "130d67283ca0360b7bfc09c7879448b68658054f", "oa_license": "CCBY", "oa_url": "https://www.mdpi.com/2673-5172/4/4/65/pdf?version=1696069140", "oa_status": "GOLD", "pdf_src": "ScienceParsePlus", "pdf_hash": "0c1f1dfc7f2f61b911cdd3183b44a1cfc270cf3e", "s2fieldsofstudy": [ "Political Science", "Sociology" ], "extfieldsofstudy": [] }
248416283
pes2o/s2orc
v3-fos-license
Whether Anterolateral Single Rod Can Maintain the Surgical Outcomes Following Oblique Lumbar Interbody Fusion for Double‐Segment Disc Disease Objective To evaluate the outcomes of oblique lumbar interbody fusion (OLIF) combined with anterolateral single‐rod screw fixation (AF) in treating two‐segment lumbar degenerative disc disease (LDDD) and to determine whether AF can maintain the surgical results. Methods A retrospective analysis was performed on patients who underwent OLIF combined with AF (OLIF‐AF) for LDDD at the L3‐5 levels between October 2017 and May 2018. A total of 84 patients, including 44 males and 40 females, with a mean age of 62.8 ± 6.8 years, who completed the 12‐month follow‐up were eventually enrolled. Clinical outcomes, including the Oswestry Disability Index (ODI), visual analog scale (VAS) score for the low back and leg, and radiographic parameters, including the cross‐sectional area (CSA) of the spinal canal, disc height (DH), foraminal height (FH), degree of upper vertebral slippage (DUVS), segmental lumbar lordosis (SL), fusion rate, and lumbar lordosis (LL), were recorded before surgery and 1 and 12 months after surgery. Surgical‐related complications, including cage subsidence (CS), were also evaluated. The local radiographic parameters were compared between L3‐4 and L4‐5. The clinical results and all radiographic parameters were compared between patients with and without CS. Results Significant improvements were observed in radiographic parameters 1 day postoperatively (p < 0.05). Local radiological parameters in L4‐5 had a significant decrease at 12 months postoperatively (p < 0.05), while they were well‐maintained at L3‐4 throughout the follow‐up period (p > 0.05). CS was observed in 26 segments (15.5%). Endplate injury was observed in four segments (2.4%). There was no significant difference in the fusion rate between the segments with and without CS (p = 0.355). The clinical results improved significantly after surgery (p < 0.05), and no significant difference was observed between the groups with and without CS (p > 0.05). Conclusions Anterolateral fixation combined with OLIF provides sufficient stability to sustain most radiological improvements in treating double‐segment LDDD. Subsidence was the most common complication, which was prone to occur in L4‐5 compared to L3‐4, but did not impede the fusion process or diminish the surgical results. Introduction O ver the past few years, the lumbar interbody fusion (LIF) technique has continued to evolve, with the aim of reducing operative complications and improving surgical achievements. As an indirect decompression technique, extreme lateral interbody fusion (LLIF) has been increasingly performed, with the advantages of minimal invasiveness, less blood loss, and shorter operative times than the conventional LIF technique via the posterior (PLIF) or transforaminal (TLIF) approach. However, the LLIF technique was reported to be associated with the risk of injuring the lumbar plexus due to its direct lateral approach that passes through the psoas major muscle. 1 In an attempt to decrease the complications related to a transpsoas approach, oblique lateral lumbar interbody fusion (OLIF) was introduced in 2012 by Silvestre et al. 2 The minimally invasive OLIF procedure provides psoas-preserving access to the index lumbar disc via an anterior oblique retroperitoneal approach between the aorta and major psoas muscle. 
A large sample study pointed out that the risks of sensory nerve injury and psoas weakness were significantly reduced following OLIF compared with LLIF, which was mainly due to the adoption of a more optimized surgical approach. 3 By virtue of the advantage of its surgical approach, a large amount of bone graft material can be implanted by inserting a relatively large cage, and at the same time, a larger and more consummate prepared endplate can be provided as a bone graft bed, eventually enabling achievement of a higher rate of intervertebral fusion following OLIF compared to other traditional LIF techniques. 4 Supplemental fixation is routinely employed to maximize the stability of spine-instrumentation construction to promote intervertebral fusion. 5 Traditionally, posterior bilateral pedicle screw fixation (PF) was considered to provide the most sufficient biomechanical support and the maximum restriction for intervertebral movement at the index construction, thus having been widely recommended previously, especially for cases requiring surgical instrumentation for two or more levels. 5 However, the invasion of posterior spinal elements for pedicle fixation, the increased surgical duration, anesthesia-related accidents, and the risk of cage migration due to intraoperative repositioning should not be neglected. 6 Therefore, a convenient instrumentation choice should be taken into consideration. An anterolateral singlerod screw fixation (AF) system could be assembled through the same incision following cage implantation, which was believed to shorten the surgical duration and mitigate the aforementioned complications caused by repositioning. AF has been biomechanically proven to significantly reduce the range of motion of the operated spinal segment in all directions after single level thoracolumbar surgery, with no significant difference in restrictions on lateral bending compared to PF, 7 and thus increasingly being used as an alternative to posterior fixation in treating single-segment degenerative lumbar disc disease (LDDD). In this context, Guo et al. 8 compared the surgical results of OLIF combined with AF (OLIF-AF) and PF (OLIF-PF) in treating single-segment LDDD. The author found that there were comparable results in terms of clinical symptom relief and radiological achievements between the two techniques. And what's more, there were advantages of shortening operation time and anesthesia time, reducing blood loss and fluoroscopy time following OLIF-AF compared to OLIF-PF. Similarly, a previous study also showed that AF well maintains postoperative radiographic achievements after OLIF for single-segment instrumentation. 9 Two-segment LDDD is a common setting of degenerative spinal disease, with a rate of 11.6% in a large population survey. 10 Patients with two-segment LDDD tend to be older. 10 With the merits of being minimally invasive and requiring shorter duration of surgery and anesthesia, OLIF-AF seems to be a better choice for patients with two-segment LDDD. However, to the best of our knowledge, there have been few reports on the use of OLIF-AF for the treatment of two-segment LDDD. 
The purpose of this study was to investigate the feasibility and effectiveness of the OLIF-AF technique for two-segment LDDD by retrospectively analyzing the clinical outcomes, radiological achievements, and complications of patients who underwent OLIF-AF for LDDD at L3-5 in our institution, in order to evaluate the unique advantages of anterolateral fixation combined with the OLIF technique and to promote the extension of indications and technical refinement of OLIF-AF surgery.

Inclusion and Exclusion Criteria
The inclusion criteria were as follows: (i) patients with chronic lumbago and leg pain who were unresponsive to conservative treatment for at least 3 months; (ii) OLIF-AF performed for two-segment LDDD at L3-5; and (iii) a follow-up of more than 12 months. We excluded patients who were diagnosed with severe stenosis (Schizas grade C or D) or stenosis caused by extruded herniated discs, calcified discs or bony spur formation, and those diagnosed with isthmic spondylolisthesis or severe degenerative spondylolisthesis (Meyerding grade II-IV). Patients who underwent additional endoscopic discectomy were also excluded. This study was performed retrospectively on 84 patients, including 44 males and 40 females, with a mean age of 62.8 ± 6.8 years, who underwent OLIF-AF alone at L3-5 between October 2017 and May 2018, and was approved by the ethics committees of West China Hospital (no. 2020-554). The requirement for informed consent was waived because of the study's retrospective nature.

Surgical Procedure
Patients were placed in the right decubitus position. The external oblique, internal oblique, and transverse abdominal muscles were separated bluntly. Subsequently, discectomy was performed at L4-5, and then an appropriate PEEK cage (height: 8-14 mm, length: 45-55 mm, width: 18 mm, lordosis: 6°) loaded with CPC rhBMP-2 was inserted into the disc space. These steps were then repeated at L3-4. Finally, three Solera screws were inserted in parallel into the L3, L4, and L5 vertebrae and connected by a single rod of appropriate length.

Radiographic and Clinical Evaluation
Serial radiographs were measured preoperatively and at 1 day and 12 months postoperatively. Clinical results were recorded before surgery and at 1 and 12 months after surgery.

Cross-Sectional Area (CSA). The CSA of the spinal canal was used to evaluate the decompression of the intraspinal structures. It was measured at the operative level using T2-weighted MRI. A single axial slice through the center of the operative disc was used as the comparative measurement location for the CSA. The outline of the thecal sac in the selected axial view was traced manually, and the enclosed area (mm²) was measured. 11

Disc Height (DH) and Foraminal Height (FH). DH and FH were used to evaluate the restoration of the decreased intervertebral disc and neural foramen space. 9 These parameters were measured using 3D-CT. DH was defined as the vertical distance between the midpoint of the upper endplate and the lower endplate on the midsagittal plane. FH was measured as the maximum distance between the inferior margin of the upper pedicle and the superior margin of the lower pedicle.

Lumbar Lordosis (LL) and Segmental Lordosis (SL). LL and SL are usually used to evaluate the alignment of the spinal column in the sagittal plane and are believed to be associated with long-term postoperative outcomes. These were measured on standing lateral X-rays.
LL was defined as the angle between the upper endplate of L1 and the upper endplate of S1, and SL was measured as the angle between the lower and upper endplates of the operated level (Figure 1).

Cage Subsidence (CS). CS manifests as the cage sinking into the vertebrae through the adjacent endplates prior to complete fusion and is regarded as a complication. It was measured as the amount of DH reduction after surgery on 3D-CT and classified into four grades: Grade 0, 0%-24%; Grade I, 25%-49%; Grade II, 50%-74%; and Grade III, 75%-100%. Grades 0 and I were considered low-grade subsidence, while Grades II and III were considered high-grade subsidence. 12

Visual Analog Scale (VAS) Score and the Oswestry Disability Index (ODI). The VAS was used to assess the pain of the lower back and leg. The score ranges from 0 (no pain) to 10 points (most pain). The ODI is currently the most widely used scale for evaluating postoperative functional recovery after lumbar surgery. The score ranges from 0 (no disability) to 100 points (most disability). Fusion was defined as the presence of bridging trabecular bone. Endplate injury was recorded in the case of discontinuities of the cortical bone of the endplate on CT view.

Statistical Analysis
SPSS 22.0 (IBM Corp., Armonk, New York, USA) was used for the statistical analysis. All measurements are presented as means ± standard deviations. Continuous numerical variables were compared using one-way repeated measures analysis of variance, and categorical variables were compared using the chi-square test. P < 0.05 was considered statistically significant.

Follow-up
All patients were followed up in the outpatient department or by telephone with a standard questionnaire survey. The mean follow-up time was 15.0 ± 1.8 months (range 12 to 18 months) (Table 1). The content of follow-up included the clinical results (ODI, VAS score), radiological changes (CSA, DH, FH, DUVS, SL, LL), and complications.

General Results
A total of 84 patients, including 40 males and 44 females, with an average age of 62.8 ± 6.8 years, were included. The average disease history and body mass index (BMI) were 4.5 ± 0.6 years and 21.9 ± 2.7, respectively. The mean surgical duration, bleeding volume, and hospitalization were 172.6 ± 8.9 min (range: 159-192 min), 67.8 ± 10.5 ml (range: 50-89 ml), and 5.7 ± 0.7 days (range: 5-7 days), respectively.

Radiography Improvement
The DH and the right and left FH at L3-4 (p < 0.001, t = 17.10, 5.66, 6.17) and L4-5 (p < 0.001, t = 16.19, 5.12, 6.21) significantly increased at 1 day postoperatively, and the improvement was comparable between the two levels (p > 0.05). Only a slight loss in these parameters was observed at L3-4 at 12 months postoperatively (p > 0.05). In contrast, these parameters showed a significant decrease at L4-5 at 12 months postoperatively (p < 0.05). Compared with the preoperative values, the CSA of L3-4 and L4-5 significantly increased at 1 day postoperatively (p < 0.001, t = 11.50, 11.81) and only slightly decreased during the 12-month follow-up (p > 0.05). No significant difference in the changes was observed between the two levels (p > 0.05). Similar trends were observed in the DUVS.
(Table note: Data presented as mean ± standard deviation; numbers in parentheses represent t values. Abbreviations: Pre-, preoperative; post-, postoperative; * p < 0.05 compared to preoperative values; ** p < 0.05 compared to 1 month postoperatively.)

Complications
No major vessel injuries or nerve root injuries occurred during the operations.
Two patients had levoscoliosis at the index intervertebral level, which made cage entry difficult even after adjustment of the operating table, and intraoperative endplate injuries were identified in both. A total of four patients (four segments) incurred endplate injury intraoperatively. They were asked to remain in bed for 4-6 weeks before being allowed to leave their bed under the protection of a lumbar brace, and CS was eventually identified in two of them. In the evaluation at 12 months postoperatively, cage subsidence was observed in 18 patients (26 segments). Only one patient, who had CS at both the L3-4 and L4-5 levels, underwent a second operation to relieve recurrent symptoms.

Discussion
Radiological Outcomes of OLIF-AF for Two-Segment LDDD
Two-segment LDDD tends to be associated with more severe spinal stenosis and greater sagittal malalignment with a lack of LL due to the loss of DH, 13 suggesting that adequately enlarging the spinal canal and correcting sagittal malalignment should be the surgical management objectives. With these objectives, we chose to implant a large cage with a lordotic angle of 6° using the OLIF approach and observed, on average, a 45.3% enlargement in CSA and an 11.1° improvement in LL. When comparing the results between L3-4 and L4-5, there were no significant differences in the improvement of local radiological parameters except for a significantly greater improvement in lordosis at L4-5 than at L3-4, which was believed to be related to the fact that the anterior margin of the psoas major muscle is more anterior at L4-5 than at L3-4, resulting in more anterior cage placement at the L4-5 level and thereby further promoting lordosis. 14

Better Clinical Outcomes of OLIF-AF for Double-Segment LDDD
To obviate the need for repositioning in the traditional OLIF procedure, Blizzard et al. 6 reported implanting posterior screws percutaneously with the patient in a lateral decubitus position. However, the rate of revision surgery due to pedicle breach was increased. In our study, we placed the instrumentation through a lateral abdominal incision with the patients in the lateral position, and the surgical duration and bleeding volume were 172.6 ± 8.9 min and 67.8 ± 10.5 ml, respectively, which were markedly lower than the values of 217.4 ± 92.1 min and 240.6 ± 153.8 ml reported by Zhang et al. 15 using OLIF with posterior fixation to treat LDDD. A relatively shortened surgical duration and less bleeding were noted, and no anterolateral instrumentation-related complications were observed in our study, indicating that anterolateral instrumentation not only saved time by obviating intraoperative repositioning but also avoided interference with the posterior spinal elements.

Good Maintenance of OLIF-AF Outcomes
Notably, postoperative loss of spinal lordosis causes greater residual low back pain 16; therefore, whether single-rod screw fixation provides sufficient additional stability to maintain radiographic achievements must be discussed. Fogel et al. 5 compared the biomechanical stability of anterolateral plate-screw fixation to that of bilateral posterior pedicle screw fixation in a single-level discectomy and concluded that the former could significantly reduce the range of motion of the operated level in all directions, with no significant difference in lateral bending between the two. Lowe et al. 17
compared the biomechanical stability of single- and double-rod screw fixation in the thoracolumbar region and found that single-rod screw fixation combined with intervertebral support could provide sufficient stability for a person of average size and normal bone quality. Previous clinical studies drew similar conclusions after single-level discectomy. 8,9 In terms of the application in two-level discectomy, we found that local radiological parameters were well maintained at L3-4 and significantly decreased only for SL, DH, and FH at L4-5 during the follow-up. Meanwhile, all parameters showed significant improvement compared to those before surgery. Therefore, we consider anterolateral single-rod screw fixation reliable for treating two-segment LDDD.

Complications of OLIF-AF
The reported complication rates of traditional OLIF are approximately 11.2%-32.2%. 14,15,18 Theoretically, OLIF-AF does not cause additional complications compared to traditional OLIF. In our study, the overall complication rate was 21.4%. The incidence of endplate injury was 4.8% (4/84 segments), which was believed to be related to poor bone condition 19 or lumbar scoliosis. Therefore, anti-osteoporosis treatment and appropriate patient positioning are helpful to avoid endplate injury. CS was the most common complication in the current study and was observed in 26 segments (15.5%) in 18 patients (21.4%), including Grade 0 in 18 segments, Grade I in eight segments, and Grade II or III in no segments. For the 18 patients with CS, the mean bone mineral density (BMD) T-score and BMI were −2.5 ± 0.3 and 26.1 ± 1.1, respectively; moreover, four segments were accompanied by endplate injuries. Therefore, we speculated that avoiding bony endplate injury, treating osteoporosis, and controlling body weight may be beneficial to prevent CS. 20 Interestingly, we found that the incidence of CS at L4-5 was significantly higher than that at L3-4 (23.8% vs 7.1%, p = 0.003), which may be attributed to the fact that the L4 vertebra has higher endplate compression strength than the L5 vertebra, 21 and this also explains the greater loss of DH, FH, and SL at L4-5 than at L3-4 (Figure 3). This phenomenon of differential CS requires us to focus more on L4-5, and we hypothesized that the load tolerance of the L5 endplate may be enhanced by tricortical screw insertion or even bone cement-reinforced screws to prevent CS. 22 CS represents the progression of cage sinking prior to complete incorporation of the fusion mass; in this process, microinstability of the operated level will gradually develop, which may affect the fusion process. 23 Several articles have discussed the impact of CS on fusion and final clinical outcomes, but the conclusion remains controversial. 17,24,25 Choi et al. 24 reported that CS did not result in lower fusion rates, while Satake et al. 19 reported that CS caused a lower fusion rate but did not affect clinical outcomes. Jiya et al. 25 reported that a higher rate of CS is most likely related to poor clinical outcomes. In our study, we did not detect a significant difference in the incidence of fusion between patients with low-grade CS and those without CS. In terms of clinical outcomes, although patients with low-grade CS exhibited statistically poorer clinical results than those without CS at 1 month postoperatively, no significant difference was observed between the two groups at 12 months postoperatively; thus, we preliminarily concluded that low-grade CS does not impede the fusion process or compromise clinical outcomes.
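To make the subsidence grading and the level-wise comparison above concrete, the sketch below codes the grade thresholds and a chi-square test of CS incidence at L3-4 versus L4-5. The per-level counts (6 of 84 segments at L3-4, 20 of 84 at L4-5) are inferred from the reported incidences of 7.1% and 23.8% and sum to the 26 subsided segments; they are an assumption for illustration, not values taken from the paper's tables.

```python
# Sketch: cage-subsidence (CS) grading from percentage disc-height (DH) loss,
# and a chi-square test comparing CS incidence between the L3-4 and L4-5 levels.
from scipy.stats import chi2_contingency

def cs_grade(dh_loss_percent: float) -> str:
    """Grade CS from the percentage reduction in DH after surgery."""
    if dh_loss_percent < 25:
        return "Grade 0"
    elif dh_loss_percent < 50:
        return "Grade I"
    elif dh_loss_percent < 75:
        return "Grade II"
    return "Grade III"

print(cs_grade(18.0))   # Grade 0 (low-grade subsidence)
print(cs_grade(42.0))   # Grade I (low-grade subsidence)

# Counts inferred from the reported per-level incidences (assumption):
# rows = level, columns = [subsided segments, non-subsided segments].
table = [[6, 78],    # L3-4
         [20, 64]]   # L4-5
# correction=False gives the plain (uncorrected) chi-square statistic.
chi2, p, dof, expected = chi2_contingency(table, correction=False)
print(f"chi-square = {chi2:.2f}, p = {p:.4f}")
```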
Limitations
The present study had some limitations. This retrospective study lacks a direct comparison with OLIF combined with pedicle screw fixation, with stand-alone OLIF, or with conventional fusion surgery such as TLIF, which limits the demonstration of the specific traits of anterolateral fixation and should be addressed in future work. In addition, our findings were based on a small sample with short follow-up. In the future, a longer follow-up investigation will be conducted in a larger cohort.

Conclusion
OLIF is an effective procedure for achieving indirect decompression and sagittal alignment repair in the treatment of two-segment LDDD at L3-5. AF construction could provide sufficient stability to sustain most achievements. Low-grade CS is the most common complication, which is more likely to occur at L4-5 than at L3-4 but does not impede the fusion process or compromise clinical outcomes.
2022-04-29T06:23:05.913Z
2022-04-28T00:00:00.000
{ "year": 2022, "sha1": "719eaa5c4e55591d9a26d1f2c777bfe4360d1542", "oa_license": "CCBYNCND", "oa_url": "https://onlinelibrary.wiley.com/doi/pdfdirect/10.1111/os.13290", "oa_status": "GOLD", "pdf_src": "PubMedCentral", "pdf_hash": "fd3e39c8184c2680e7bed150086cb30c155f0336", "s2fieldsofstudy": [ "Medicine", "Engineering" ], "extfieldsofstudy": [ "Medicine" ] }
225684562
pes2o/s2orc
v3-fos-license
“A STUDY ON AHARAJA AND VIHARAJA HETUS IN THE ETIOPATHOGENESIS OF VISWACHI TO CERVICAL RADICULOPATHY”
Cervical radiculopathy (radix = root), or cervical spondylotic radiculopathy, involves compression of a nerve root that occurs when a disc prolapses laterally or when there is osteophytic encroachment of the intervertebral foramina, presenting the features of neck pain that may radiate in the distribution of the affected nerve root, and can be paralleled with viswachi. It is universally accepted that radicular symptoms in the arm usually indicate nerve root entrapment secondary to a paracervical disc protrusion or, in the older population, to foraminal bony hypertrophy. Nerve root pain may be very distressing and is often compatible with manual or office work for a variable period of time, depending upon the pathology. The present study of Viswachi is limited to cervical spine lesions. Among the degenerative diseases of the cervical spine, cervical radiculopathy is clinically correlated with Viswachi of vatavyadhi. It is commonly seen in old age, but in the present scenario it is also seen in young and middle-aged people. The annual incidence of cervical radicular symptoms is 83.2 per 100,000 population, and its prevalence is most significant in the 50-54 yr age group. It is most prevalent among farmers and the labour class who lift, push or pull heavy objects, operate vibrating equipment, or adopt odd occupational postures, such as tailors, drivers, daily wage workers etc. who are involved in strenuous activities. And today, as a result of modernization, the most common trend we witness in this busy world is people often going for long drives in vehicles, working for long hours in front of computers, night outs in call centers etc., ultimately making them early or late victims of Viswachi (cervical radiculopathy), one of the commonest causes of neck pain.
Materials and Methods:- The present study entitled "A STUDY ON THE AHARAJA AND VIHARAJA HETUS IN THE ETIOPATHOGENESIS OF VISWACHI W.S.R TO CERVICAL RADICULOPATHY" was carried out with the following methodology.
Duration of the study: Since this is an observational study, patients were kept under observation till the clinical and radiological evaluation was done.
Follow up: The study did not require follow up as this is an observational study.
Investigations: X-ray, cervical spine AP and lateral view.
1. Age: In the present study, the maximum number of patients recorded were above the age of 36, and among them most of the patients, i.e. 40%, in the age group of 36-50 yrs had Viswachi. Prolonged strenuous work during their 2nd and 3rd decades might be the reason.
2. Sex: In the present study there were 35 male patients, i.e. 70%, and 15 female patients, i.e. 30%. This is because of the involvement of men in more strenuous physical activities and some occupational postures.
3. Religion: 50% of the patients were Hindus, 30% were Muslims, and 20% were Christians. The predominance of the Hindu religion in and around the area is reflected in this sample. The high incidence of illness in Hindus reflects the prevalence of causation of the disease.
As this may be the representation of the community distribution in and around Bengaluru city. 4. Education: In the present study it was observed that most of the people had education up to primary school and graduates. However education has minimum role on the pattern of disease. 5. Socio economic status: In the present study 48% were from middle class family, 46% were from lower class family, and 6% were from upper class. This data suggests that, lifestyle of middle class people either in the form of heavy work or influence of their profession in causing the particular disease. 6. Occupation and nature of work: While considering the nature of occupation, it was observed that maximum i.e. 40% were labourers, 20% were house wives, and 30% were professionals. Heavy manual works and improper postures may lead to sthanika vataprakopa pain greeva and amsapradesha resulting in viswachi. Cervical radiculopathy is considered as an occupational hazard as most of the professionals become its victim due to their improper working pattern which has its effect on cervical spine. 7. Marital status: In present study 70% were married and 30% were unmarried. This is because the incidence occurs as an occupational hazard in middle and old age. 8. Duration of complaints: Maximum number of patients i.e. 40% had complaints more than >5yrs, 30% were <1yr and 30% were>5yrs. It indicates the chronicity of the disease. 9. Aggravating factor: In the present study 40% had weight lifting as an aggravating factor and 20% had bathing. Results:- Heavy manual works and improper postures may lead to sthanika vataprakopa pain greeva and amsapradesha resulting in viswachi. 10. Family history: 96% of patients didn't have any family history. Thus we can conclude that the role of family history is very minimal in manifestation of viswachi. 11. Desha: Majority of the patients i.e. 40% of them belonged to sadharanadesha and the maximum no. of patients visiting the hospital are from surrounding locality. Hence it is difficult to draw any conclusion out of it. Dashavidha Pariksha 1. Prakriti: It was assessed based on the major physical, psychological and behavioural features of the patient. In the present study it was observed that majority of the patients had vata-kaphaja prakriti i.e.46% and also vatapittaja prakriti i.e.40%. As the prakupita vata effect will be more seen in these patients who have dominance of vata in them. 2. Sara: Vishudhhatara dhatu is called sara (Ca.Vi.8/102-115). The obtained data supports the view mentioned in classics that madhyama and avara sara people are prone to diseases. In this study, 90% of the patients belonged to madhyama sara and 10% had avara sara. 902 3. Samhanana: Data pertaining to samhanana reveals that patients of viswachi were associated with medium physical constitution. In this study, 92% of them had madhyama samhanana which is due to continuous dhatukshaya causing vatavriddhi. 4. Satva: The psychological factors play a chief role in causing progression of discomfort as we know that chinta, shoka, bhaya etc manasikakarana leads to vataprakopa. Acharya Charaka has stated the people with madhayama and avara satva are prone more to diseases. The analysis of satva revealed that 70% patient's had madhyama satva. 5. Vyayama Shakti: In this present study, 52% patients had avara vyayama shakti and 36% had madhyama vyayama shakti. This clearly mentions that the pain and discomfort due to viswachi invariably decreases vyayama shakti in an individual. 
Vishamashana Among all the 50 patients, 80% of them did vishamashana. "Aprapta atitakalam tu bhuktam vishamashanam iti" refers to untimely and delayed consumption, should be considered which leads to vitiation of agni resulting into formation of ama. And it is explained that ama is one of the major factors in pathogenesis of many diseases and viswachi is one among them. Adhyasana Among all the 50 patients, 40% of them did adhyasana. "Bhuktasyoparibhuktam adhyasana " Taking food before the digestion of previous food decreases the secretion of digestive enzymes and disturbs digestion of food and produces ama, which in turn is a major causative factor in pathogenesis of many diseases and viswachi is one among them. 903 Vishtambhi ahara Among all, 40% of patients consumed vishtambhi ahara like chapati, ragi ball, masura, mudga, adhaki and other varieties belonging to this group. These dravya possess kashaya and madhura rasa, katu vipaka, shita and laghu guna. Hence these vishtambhi foods lead to vibandha which is a cause for vata prakopa and in turn is a cause for manifestation of viswachi. Truna dhanya In this study among 50 patients, trunadhanya such as ragi was consumed by 70% patients and jowar by 30% patients. Ragi is kashaya and tikta rasa predominant, is laghu and shita virya where all its properties cause vata prakopa and similarly jowar is kashaya rasa predominant and shita virya aiding for vata prakopa which in turn is cause for manifestation of disease viswachi. Ruksha ahara In this study among 50 patients, ruksha ahara such as chapatti, jowar roti, ragi ball was consumed by 90% of patients. These foods are ruksha guna predominant. Ruksha guna is responsible for shoshana, katinatva, and rukshana actions. Ruksha guna is mainly related to vatadosha. It subsides kapha and aggravates vata in turn is a cause for manifestation of diseases like viswachi and other vatavikara. Laghu guna ahara In this study among 50 patients, laghu ahara such as pongal, white rice, salad (cabbage, onion, tomato) was consumed by 100% of patients. These food items are laghu in guna and it does kaphashamana and vata vardhana and hence cause vataprakopa in turn causing manifestation of viswachi. Kashaya dravya In this among 50 patients, kashayadravyas such as unripe banana, okra/bhindi, chick peas were consumed by 100% patients. Kashayarasa is kaphapittahara and vatakara. It is having properties like ruksha, sheeta, and laghu which are also shared by vata. Excessive consumption of sheetaguna leads to obstruction of srotas, hinders the movement of vataetc. The gunas like khara, vishada and ruksha produces diseases like viswachi and other vata vikara. Katu dravya In this study among 50 patients, katu dravyas such as chilly, onion, garlic, ginger, black pepper were consumed by 100% patients. Katu rasa dravya has vayu and agni mahabhuta dominance. It has laghu and rukshaguna. It causes toda and bheda in the region of charana (feet), bhuja(shoulders), parshwa(flanks), prushta(back) and causes diseases of vata, among which one is viswachi. Sheeta ahara In this study among 50 patients, sheeta ahara such as ice-cream, cold-drinks, fruits(apple, grapes,melon,tender coconut) was consumed by 70% patients. Excessive consumption of sheeta guna leads to obstruction of srotas, hinders the movement of vata etc.The gunas like khara, vishada and ruksha produces diseases like viswachi and other vata vikara. 904 Here all the above mentioned ahara dravyas quantity and the no. 
of times the person is consuming the ahara dravyais taken into account as it was consumed for maximum no. of times when compared to other foods on a daily basis and in large quantity per serving. Hence the person who does not follow ashta aharavidhi, dwadasha ashana pravicharas are more prone for the vitiation of vatadosha. Therefore the type of diet, its quality, and method of preparation, taste and potency, post digestive effects of the diet, time, season, and mental state during intake of food should be taken into account. Dukha asana and dukhashayya i.e due to improper sitting and sleeping posture there will be obstruction in the pathway of vatadosha leading to its vitiation. In the present era this etiological factor has important significance as the professionals are in a habit of working for long hours and taking rest in improper postures. Due to vegadharana and vegaudeerana there will be vilomagati of vatadosha leading to various dreadful vata vikaras. Nowadays in this busy lifestyle people have habit of suppressing their natural urges which reflects in various pathological conditions. Prime vitiation occurs to the vatadosha. Ratrijagarana contributes rukshaguna of vata and brings about vataprakopa, hence causing manifestation of viswachi or other vatavikara. Nowadays in most of the profession night shifts are common and also people also watch media during late night without considering the right time to sleep. All these factors should be taken into account. Ativyayama-"shareera aayasajanakam karmam vyayamam uchyate". Excessive vyayama i.e more than half extent of one's own capacity leads to shosha and vataprakopa leading to shoola. From today's point of view, we can consider weight lifters, swimming professionals, potters, daily wage laborers, market vendors, sweepers, cleaners etc; where finally all these individuals end up with the vitiation of vatadosha. Ativyavaya-due to excessive indulgence in sex without following the rules explained in maithunavidhi leads to dhatukshaya and vataprakopa. Ratha ati charyadue to excessive vehicle riding, vataprakopa happens in neck & shoulder region further leading to sthanasamshraya of doshas in bahu leading to viswachi vyadhi. All the above mentioned vihara's lead to vataprakopa, hence cause manifestation of viswachi. 905 Acknowledgement:-I take this opportunity with pleasure to thank all the people who have helped me throughout the course towards producing this work. My deepest sense of gratitude to my HOD, Guide, Principal, Staff, my colleagues, & last but most importantly my dear family for supporting me throught the study period. Medicinal research cannot be carried out without the enthusiastic attitude and due patience of the patient. I sincerely thank all my patients who kindly allowed me to carry this research work on themselves. Last but not the least I thank all those who have helped me directly and indirectly in successful completion of this work.
2020-07-16T09:08:56.106Z
2020-06-30T00:00:00.000
{ "year": 2020, "sha1": "ce910a4f2373c889bf717335eb2b46942020ecb9", "oa_license": "CCBY", "oa_url": "https://doi.org/10.21474/ijar01/11169", "oa_status": "HYBRID", "pdf_src": "Adhoc", "pdf_hash": "c00fcbb617194c14b03105b18064035799f61d61", "s2fieldsofstudy": [ "Medicine" ], "extfieldsofstudy": [] }
265449022
pes2o/s2orc
v3-fos-license
Ecological disturbance reduces genomic diversity across an Alpine whitefish adaptive radiation Abstract Genomic diversity is associated with the adaptive potential of a population and thereby impacts the extinction risk of a species during environmental change. However, empirical data on genomic diversity of populations before environmental perturbations are rare and hence our understanding of the impact of perturbation on diversity is often limited. We here assess genomic diversity utilising whole‐genome resequencing data from all four species of the Lake Constance Alpine whitefish radiation. Our data covers a period of strong but transient anthropogenic environmental change and permits us to track changes in genomic diversity in all species over time. Genomic diversity became strongly reduced during the period of anthropogenic disturbance and has not recovered yet. The decrease in genomic diversity varies between 18% and 30%, depending on the species. Interspecific allele frequency differences of SNPs located in potentially ecologically relevant genes were homogenized over time. This suggests that in addition to the reduction of genome‐wide genetic variation, the differentiation that evolved in the process of adaptation to alternative ecologies between species might have been lost during the ecological disturbance. The erosion of substantial amounts of genomic variation within just a few generations in combination with the loss of potentially adaptive genomic differentiation, both of which had evolved over thousands of years, demonstrates the sensitivity of biodiversity in evolutionary young adaptive radiations towards environmental disturbance. Natural history collections, such as the one used for this study, are instrumental in the assessment of genomic consequences of anthropogenic environmental change. Historical samples enable us to document biodiversity loss against the shifting baseline syndrome and advance our understanding of the need for efficient biodiversity conservation on a global scale. | INTRODUC TI ON Genetic diversity represents the most fundamental level of biodiversity.Genomic diversity is central to sustain viable populations and to preserve evolutionary potential, enabling the adaptation to changing environmental conditions (Hoffmann et al., 2017;Willi et al., 2022).As a consequence, genomic diversity is one key component determining the extinction risk of a population during environmental change (Jensen & Leigh, 2022).Disturbance of ecosystems can influence not only genomic variation through both selective but also demographic (and selectively neutral) processes, as well as the interaction of both (Banks et al., 2013). As a result, the history of environmental disturbance may be a major driver shaping patterns and dynamics of genomic diversity in many natural systems (Banks et al., 2013).As both the frequency and strength of anthropogenic ecological disturbances are increasing (IPBES, 2019;Turner, 2010), it is essential to advance our understanding of how such disturbance affects biodiversity at its most basal level, which is genetic and/or genomic diversity (Banks et al., 2013). 
Anthropogenic eutrophication during the last century had dramatic consequences on the biodiversity of many perialpine lakes in Switzerland (Feulner & Seehausen, 2019;Frei, De-Kayne, et al., 2022;Vonlanthen et al., 2012).The effects on many species of the Alpine whitefish were particularly detrimental.In total, about a third of the more than 30 taxonomically described whitefish species went extinct during the period of anthropogenic eutrophication (Selz et al., 2020;Steinmann, 1950;Vonlanthen et al., 2012). High-nutrient inputs altered many habitat characteristics of the deep and oligotrophic Swiss lakes, affecting both diet and reproduction of many whitefish species (Vonlanthen et al., 2012).The loss of suitable, well-oxygenated spawning grounds together with the shift in food resources resulted in the extinction of multiple species through a combination of demographic decline and speciation reversal through introgressive hybridization (Frei, De-Kayne, et al., 2022;Vonlanthen et al., 2012).The improvement of sewage treatment and phosphorus management towards the end of the last century resulted in many of the Swiss lakes returning close to their natural oligotrophic state (Vonlanthen et al., 2012).Even though the changed environmental conditions were transient and of relatively short duration, the period of cultural eutrophication had severe consequences on the genomic variation of the Alpine whitefish radiation (Frei, De-Kayne, et al., 2022). In evolutionary young adaptive radiations, such as the Alpine whitefish radiation, sympatric species are still able to hybridize (Schluter, 2000;Seehausen et al., 2008).This is because complete reproductive isolation takes orders of magnitudes longer to evolve than the rapid speciation events in such young radiations (Schluter, 2009;Seehausen et al., 2008).The ability to exchange genomic variation might become particularly important during ecological disturbance: When environmental conditions rapidly change into an unfavorable state for a certain species of a young adaptive radiation, habitats can be lost and food resources might become unavailable, resulting in demographic decline.The decreasing population size is strengthening genetic drift, reducing genetic diversity in the declining population.In such a situation, the exchange of genomic variation with other members of the adaptive radiation through hybridization could become beneficial (Frei, Reichlin, et al., 2022;Grant & Grant, 2019).Hybridization might increase genomic variation of the population and enhance its evolvability, increasing the likelihood of adaptation to the changed environmental conditions through evolutionary rescue (Gilman & Behm, 2011).In order to document the effects of ecological disturbance on genomic variation following natural disturbance, genomic time-series data capturing the disturbance event are essential (Jensen & Leigh, 2022). The Lake Constance whitefish radiation was strongly affected by anthropogenic eutrophication during the last century.Using historical fish scale samples, previous work demonstrated that all four taxonomically described whitefish species of Lake Constance (Coregonus gutturosus, C. arenicolus, C. macrophthalmus, C. 
wartmanni) extensively hybridized during the eutrophication period (Frei, De-Kayne, et al., 2022;Vonlanthen et al., 2012).One species (C.gutturosus) went extinct by a combination of demographic decline and speciation reversal through introgressive hybridization during the period of anthropogenic eutrophication (Frei, De-Kayne, et al., 2022;Vonlanthen et al., 2012), and according to fisheries management, population sizes of all Lake Constance whitefish dramatically decreased over the last decades (Alexander & Seehausen, 2021).The potential to generate temporal whole-genome resequencing data spanning the entire eutrophication event and including four species makes the Lake Constance whitefish radiation an outstanding system to study the effects of ecological disturbance on genomic diversity. Previous work focused on speciation reversal and the genomic consequences of hybridization between all species within the Lake Constance Alpine whitefish radiation (Frei, De-Kayne, et al., 2022).Here we evaluate changes of genomic diversity within species over time.We assess if and to which extent intraspecies genomic diversity declined besides hybridization between species.Furthermore, while previous work highlighted the potential of adaptative introgression of alleles from an extinct species into contemporary species (Frei, De-Kayne, et al., 2022;Frei, Reichlin, et al., 2022), we here explore genomic patterns consistence with a loss of potentially functionally relevant variation over time.To do so, we used natural history collections to sequence population scale data (5-12 whole genomes per population) of each of the four Lake Constance whitefish species before the onset of the anthropogenic eutrophication (before 1950), as well as data from all three extant species during the peak eutrophication period (1970)(1971)(1972)(1973)(1974)(1975)(1976)(1977)(1978)(1979)(1980).In combination with existing sequencing data from the three surviving species (N = 6-13) collected after the eutrophication period ended and the lake returned to an oligotrophic state, we produced a time-series data set capturing the whole period (pre, during and post) of anthropogenic eutrophication.During this period of anthropogenic ecological disturbance, we observed a strong decline in genomic diversity over time and found genomic signals of population declines in all species.By establishing a baseline of genomic diversity before the occurrence of an anthropogenic disturbance, our work also demonstrated the value of natural history collection for biodiversity research. | Sample collection Historical whitefish scale samples previously used in Vonlanthen et al. (2012) and Frei, De-Kayne, et al. (2022) were used to extract DNA from 12 individuals of each population (pre-and during-eutrophication) of each species (see Table S1).These samples were collected from fisheries authorities around the lake during the last century and have been assembled by David Bittner (see Vonlanthen et al., 2012 for details).For the post-eutrophication populations, we used sequencing data (sampled 2015) produced by Frei, De-Kayne, et al. (2022) retrieved from ENA with accession PRJEB43605, as well as data from Frei, Reichlin, et al. (2022) (sampled 2019) retrieved from ENA with accession PRJEB53050 (see Table S1 for the 112 sample accessions). | DNA extraction and sequencing DNA was extracted according to Frei, De-Kayne, et al. (2022). In brief, DNA extraction of historical scale samples was done using the Qiagen DNeasy blood and tissue kit (Qiagen AG, CH). 
For scale samples, we followed the manufacturer's protocol for crude lysates with minor adjustments (alternative lysis buffer from Wasko et al., 2003 containing 4M UREA and overnight incubation at 37°C). Libraries were produced using the Accel-NGS 1S Plus DNA library kit (Swift Biosciences) at the NGS platform of the University of Bern.Libraries were then sequenced paired-end 100 bp on a Novaseq 6000 S4 flowcell. | Population genomic analysis Genotype likelihoods at polymorphic sites were calculated using angsd 0.925 (Korneliussen et al., 2014), using the samtools genotype likelihood model.For that purpose, we excluded reads with a mapping quality below 30, bases with base qualities below 20 and reads that did not map uniquely to the reference.Only sites passing a pvalue cut-off of 10E-6 for being variable, with a sequencing depth above 2× in each individual and with data of at least 80 of all 112 individuals were included.Additionally, we only analysed whitefish chromosomes without any potentially collapsed duplicated regions (De-Kayne et al., 2020) to avoid potential bias due to imbalanced ploidy levels (also note that there is no evidence for a heterogametic sex chromosome in Alpine whitefish).We applied SNP filters to avoid strand bias (-sb_pval 0.05), quality score bias (-qscore_pval 0.05), edge bias (-edge_pval 0.05) and mapping quality bias (-mapq_ pval 0.05).This resulted in a total of 355,311 polymorphic sites for further analysis. To verify the species assignment done in the field when these samples have been collected by fisheries authorities, we performed a PCA using PCAngsd 1.02 (Meisner & Albrechtsen, 2018). We excluded sites with a minor allele frequency below 0.05 (across the whole dataset), resulting in 128,164 sites.Default parameters were used, except for using the first three eigenvectors to estimate individual allele frequencies (-e 3).By this PCA approach, S1). Based on the genotype likelihoods inferred in the remaining 100 samples (excluding 12 individuals potentially misidentified in the field), we calculated Watterson's theta (θ ω ) and Tajima's D in 100 kb windows along the genome (Korneliussen et al., 2013).To do this, the folded site allele frequency likelihood for each species and each sampling timepoint separately (N = 5-13; see Table S2) was calculated in angsd (0.925) (Korneliussen et al., 2014;Nielsen et al., 2012).The maximum likelihood estimate of the folded site allele frequency spectrum was inferred using realSFS of angsd (0.925) (Korneliussen et al., 2014).With the global site allele frequency spectrum, we calculated different theta estimators and Tajima's D in 100 kb windows using thetaStat of angsd (0.925) (Korneliussen et al., 2013(Korneliussen et al., , 2014)).We used all 100 kb windows to calculate a genome wide average. We used NgsRelate v2 (Hanghøj et al., 2019;Korneliussen & Moltke, 2015) to calculate pairwise relatedness between all individuals of each species at each sampled timepoint based on genotype likelihoods.We split the genotype likelihood file generated across all species and timepoints into each single species and timepoints, and used these separate genotype likelihood fields as input to NgsRelate v2 (Hanghøj et al., 2019;Korneliussen & Moltke, 2015).At each polymorphic site in the genotype likelihood file, we calculated the allele frequency in each species at each timepoint in angsd 0.925 (Korneliussen et al., 2014) using the method from Kim et al. 
(2011), and also used this allele frequency information as input to NgsRelate v2 (Hanghøj et al., 2019; Korneliussen & Moltke, 2015), which we then used to calculate pairwise relatedness with default parameters. We finally calculated the mean relatedness in each species and timepoint by averaging across all pairwise relatedness values for each population in R (R Core Team, 2018). For each of the 355,311 polymorphic sites, we calculated the weighted FST between each species and all other species pooled together (using only the pre-eutrophication populations) in angsd 0.925 (Korneliussen et al., 2014), from one- and two-dimensional site frequency spectra inferred from site allele frequencies (Nielsen et al., 2012). We plotted FST values across the genome and identified the sites with the highest resulting FST values, which are most characteristic of the respective species and thus might be involved in its adaptation to its habitat. At the sites with the highest FST, we then calculated the allele frequency in each species and at each timepoint in angsd 0.925 (Korneliussen et al., 2014), following the method of Kim et al. (2011), to track the change in allele frequency differences over time. To represent the sites most characteristic of the respective species, we selected either the sites with the top 10 highest FST values, the top 50 highest FST values, or all sites in the tail of the FST value distribution (using an empirical p-value cutoff of 0.001). Additionally, we calculated the allele frequencies in each species and at each timepoint for the SNP (position 30197713 on scaffold 23) within the gene edar that has been found to be significantly associated with gill-raker count (De-Kayne et al., 2022), a trait that is relevant for the feeding ecology of each species. We further blasted the protein sequence of the gene vgll3, which is known to be relevant for age at maturity in Salmo salar (Barson et al., 2015), against the Alpine whitefish genome and found two equivalent best hits. For any SNPs in these two genes (likely paralogous copies of vgll3), we also calculated the allele frequencies in each species and each timepoint to document the change in allele frequencies over time.
| RESULTS
We used natural history collections to sequence population genomic time-series data, including an entire adaptive radiation and capturing a period of transient but severe ecological disturbance, with the aim of documenting the influence of ecological disturbance on the genomic diversity of each single species, but also on the entire adaptive radiation (Figure 1).
| Population structure
We performed a PCA based on genotype likelihoods of 128,164 SNPs to visualize the population structure of the Lake Constance whitefish radiation over time, i.e., over the period of anthropogenic eutrophication (Figure 1e). Apart from the extinction of C. gutturosus, the three extant species qualitatively cluster closer together post-eutrophication compared with pre-eutrophication, suggesting that the species are today less differentiated than before the onset of eutrophication. This might be the consequence of interspecific hybridization during the eutrophication period, as demonstrated in previous work (Frei, De-Kayne, et al., 2022; Frei, Reichlin, et al., 2022; Vonlanthen et al., 2012), and is also consistent with several potential early-generation hybrids at the during-eutrophication sampling timepoint.
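Before turning to the diversity results, a rough, self-contained illustration of the window-based summary statistics described in the Methods above (Watterson's θ and Tajima's D derived from a site frequency spectrum) may be useful. The sketch below works from a single hypothetical unfolded site frequency spectrum with hard-called allele counts; this is a simplification of the genotype-likelihood-based angsd/realSFS approach actually used in this study, and the sample size, window and SFS values are invented purely for illustration.

```python
import numpy as np

def watterson_theta_and_tajimas_d(sfs, n):
    """Watterson's theta and Tajima's D (Tajima 1989) for one window.

    sfs : array of length n-1; sfs[i-1] = number of sites at which the derived
          allele is carried by exactly i of the n sampled chromosomes.
    n   : number of sampled chromosomes (2 x the number of diploid individuals).
    """
    sfs = np.asarray(sfs, dtype=float)
    i = np.arange(1, n)                       # derived-allele counts 1..n-1
    S = sfs.sum()                             # number of segregating sites
    if S == 0:
        return 0.0, np.nan

    a1 = np.sum(1.0 / i)
    a2 = np.sum(1.0 / i**2)
    theta_w = S / a1                          # Watterson's theta (per window)

    # Mean pairwise diversity (pi) computed directly from the SFS
    pi = np.sum(2.0 * i * (n - i) * sfs) / (n * (n - 1))

    # Variance-normalising constants for Tajima's D
    b1 = (n + 1) / (3.0 * (n - 1))
    b2 = 2.0 * (n**2 + n + 3) / (9.0 * n * (n - 1))
    c1 = b1 - 1.0 / a1
    c2 = b2 - (n + 2) / (a1 * n) + a2 / a1**2
    e1 = c1 / a1
    e2 = c2 / (a1**2 + a2)

    d = (pi - theta_w) / np.sqrt(e1 * S + e2 * S * (S - 1))
    return theta_w, d

# Hypothetical SFS for one 100 kb window, 20 chromosomes (10 diploid fish):
n = 20
rng = np.random.default_rng(1)
sfs = rng.poisson(lam=60.0 / np.arange(1, n), size=n - 1)  # roughly neutral shape
theta_w, taj_d = watterson_theta_and_tajimas_d(sfs, n)
print(f"Watterson's theta = {theta_w:.1f} per window, Tajima's D = {taj_d:.2f}")
```

In this formulation, an excess of rare variants relative to neutral expectations drives Tajima's D negative (as observed before and during eutrophication), whereas a deficit of rare variants, as expected after a recent contraction, drives it positive.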
| Nucleotide diversity
Irrespective of species, nucleotide diversity (measured as Watterson's theta) declined over time (Figure 1b). In each species, nucleotide diversity was highest before the onset of anthropogenic eutrophication and lowest post-eutrophication, while the populations sampled during peak eutrophication showed intermediate values between the pre- and post-eutrophication populations of the species. In total, C. wartmanni lost ~23%, C. arenicolus lost ~28% and C. macrophthalmus lost ~30% of the nucleotide diversity they had before the onset of the eutrophication period.
| Tajima's D
The genome-wide average of Tajima's D of each species was negative before and during the period of anthropogenic eutrophication (Figure 1c), indicative of population expansion after a recent bottleneck. This might reflect the recent colonization and evolution of the radiation within Lake Constance since the last glacial maximum, 10,000-15,000 years ago. However, in each species, Tajima's D was positive (D > 0; Figure 1c) after the period of anthropogenic eutrophication ended, potentially indicating a sudden population contraction associated with the altered environmental conditions.
| Mean relatedness
For each extant species of Lake Constance whitefish, we calculated the pairwise relatedness between all individuals before, during and after the period of anthropogenic eutrophication (Figure S2). Consistent with a decrease in population size, mean relatedness increased in all species over the period of eutrophication. All species showed mean relatedness values below 0.01 before the start of eutrophication, while mean relatedness ranged between ~0.2 and ~0.28 in the post-eutrophication populations of the three extant species (Figure 1d).
| Frequency shifts over time
We identified the most characteristic alleles of each species by calculating the FST between each species and all other species pooled into one population at all SNPs along the genome (using only the pre-eutrophication populations). For each pairwise comparison, the 50 positions with the highest FST values were largely spread across different chromosomes (Figure 2). In line with previous work that identified few fixed differences between sympatric species of the Alpine whitefish radiation (De-Kayne et al., 2022), we did not detect any fixed differences (FST > 0.95). In all three extant species, we observed almost identical patterns of frequency trajectories at all of the 50 most characteristic sites: allele frequency differences between species were homogenized over the period of eutrophication, because the frequency of the predominant allele in the focal species decreased while its frequency in all other species increased over time (Figure 3). We observed qualitatively similar patterns when the top 10 most differentiated sites (Figure S3) or 356 sites in the tail of the distribution (Figure S4) were selected as the most characteristic sites. We also assessed the allele frequency change in two genes that affect ecologically relevant phenotypes in salmonids. At a locus in the edar gene involved in determining the gill-raker count of whitefish species (De-Kayne et al., 2022), the pre-eutrophication samples of the species with a low gill-raker count showed very high frequencies of the non-reference allele (C. gutturosus 0.99 and C. arenicolus 0.66), while the species with a higher gill-raker count (both C. wartmanni and C. macrophthalmus) were fixed for the reference allele (Figure 4a). After the eutrophication period ended, the differences between the surviving species became smaller (C. macrophthalmus increased from 0 to 0.18, C. wartmanni increased from 0 to 0.12, and C. arenicolus decreased from 0.66 to 0.43). We detected six polymorphic loci within two vgll3 paralogs, a gene that is known to be involved in age at maturity in S. salar
(Barson et al., 2015; Czorlich et al., 2018). For five of these six loci, the allele frequencies varied little between species and were very low (minor allele frequency below 0.15 across all species and timepoints). However, one SNP showed a pattern in which frequencies were differentiated between the species before eutrophication, but this differentiation was completely lost after eutrophication (Figure 4b).
| DISCUSSION
Genetic diversity is a core component of the adaptive potential of a population. As a result, the maintenance of genetic diversity is fundamental for fast adaptive responses to rapid environmental change. Hence, genetic diversity is a key component of the extinction risk of species during environmental change (Jensen & Leigh, 2022) and is used as a metric to monitor threatened populations (Hoban et al., 2022). Anthropogenic ecological disturbance has the potential to decrease genetic diversity in natural populations (Sánchez-Barreiro et al., 2021; Themudo et al., 2020). Here, we generated population-level whole-genome resequencing data of all extant species of an adaptive radiation before, during and after a severe but transient period of anthropogenic eutrophication. We tracked genomic diversity through time and found that genomic diversity was reduced after the period of eutrophication. Based on our results, we discuss the implications of reduced genetic diversity for the adaptive potential and extinction risk, as well as the relevance of such data for management and conservation.
| Population decline during anthropogenic eutrophication
We observed substantial declines of genetic diversity in each species of the radiation over time, presumably in response to anthropogenic ecological disturbance of the ecosystem. In parallel, we observed a shift in Tajima's D from negative to positive in all species. Such a change in genome-wide Tajima's D suggests that rare alleles were lost over time, a pattern expected in declining populations. Furthermore, also in line with a decreasing population size, relatedness within each species increased over the period of eutrophication. Our estimates of relatedness are likely biased due to the limited sample size of the different species at specific timepoints (N = 5-13). Small sample sizes such as those in this study are expected to underestimate relatedness, and the bias is roughly proportional to sample size (Wang, 2017). Hence, while absolute values are biased, trends through time might still provide relevant insights. Taken together, these results suggest that the period of anthropogenic eutrophication resulted in a demographic decline of all three species, probably due to habitat loss and shifts in the available prey community (see Vonlanthen et al., 2012). Such a reduction in population sizes might potentially have strengthened genetic drift and eventually resulted in the loss of substantial amounts of genomic variation in a very short period of time (~30 years or ~6 whitefish generations; Nussle et al., 2009). It has been theoretically predicted that, as long as genomic diversity declines linearly, population sizes might still be relatively large (Jensen & Leigh, 2022).
When population sizes are declining fast, many rare alleles are lost rapidly, leading to a strong initial loss of genomic diversity. After this initial loss of rare alleles, all remaining alleles become common and thus it becomes harder to lose further variation via drift, slowing down the decline of genomic diversity. Hence, when population sizes become small, the loss of genomic diversity is L-shaped or exponential (Jensen & Leigh, 2022). We here observed rather linear declines of genomic diversity in all three studied species over time, in addition to a shift in genome-wide Tajima's D consistent with a loss of rare alleles. This suggests that, although population sizes declined during the period of eutrophication, each species might still have retained a relatively large effective population size. Estimates of nucleotide diversity (Watterson's θω) of 3.8 × 10−5 before eutrophication imply an effective population size Ne of about 947 (Ne = θω/4μ; assuming a mutation rate of approximately 1 × 10−8 per site per generation).
| Evolutionary rescue enabled by hybridization
Previous work showed that all the Lake Constance whitefish species extensively hybridized during the period of anthropogenic eutrophication, while there was little to no selection against genomic variation derived from hybridization (Frei, De-Kayne, et al., 2022). Even though hybridization might not have been able to counteract the loss of genomic diversity caused by the population declines during eutrophication, it could have helped to maintain diversity in each species and to overcome the negative effects of reduced genomic variation. Through such a scenario of evolutionary rescue, interspecific hybridization might become adaptive when populations rapidly decline (Stelkens et al., 2014; Vedder et al., 2022). This suggests that, when environments rapidly change, the risk of extinction of species that have evolved complete reproductive isolation against all other sympatric species might be higher than that of species that are still able to hybridize with one or several other species. Hence, the evolution of complete reproductive isolation of a species can become an obstacle to the survival of that species under rapid environmental change.
| Frequency differences homogenized in all species
We identified the 50 historically most characteristic SNPs of each species (the 50 highest FST values between the pre-eutrophication population of each species and the pre-eutrophication populations of all other species pooled together). These positions were chosen to reflect some of the adaptation to each species' habitat that has evolved over the course of the radiation (da Silva Riberio et al., 2022). When we compared the allele frequencies at these sites before, during and after eutrophication, we found that the frequency differences became homogenized over the period of eutrophication. Often, the frequency of the predominant allele in the focal species decreased while its frequency in all other species increased over time, suggesting that the homogenization of allele frequency differences happened in all species. Two ecologically relevant genes, the edar locus associated with gill-raker count from De-Kayne et al.
(2022) and paralogs of vgll3, which is involved in the determination of age at maturity in Atlantic salmon (Barson et al., 2015; Czorlich et al., 2018), showed the same pattern of homogenized interspecific allele frequency differences over the period of anthropogenic eutrophication. The loss of frequency differences at ecologically relevant loci is in line with phenotypic data showing reduced ranges of gill-raker numbers after eutrophication (Vonlanthen et al., 2012). Furthermore, the homogenization of allele frequency differences at ecologically relevant alleles is consistent with extensive hybridization during the period of anthropogenic eutrophication, presumably in combination with reduced divergent selection between species. Hence, parts of the original species differentiation that might have evolved in response to the selective pressures in the habitat of each species have potentially been lost as a consequence of anthropogenic eutrophication. We here qualitatively describe patterns of allele frequency change, refraining from quantitative evaluation because of our moderate sample sizes (N = 5-13) and low sequencing coverage (average coverage around 3-4×; see Table S1 for the mean coverage per individual and Table S2 for the average coverage of each species and timepoint). A simulation study exploring parameters close to those of our study (Lou et al., 2021), which examined the correlation between estimated and true allele frequencies across different sample sizes and sequencing coverages, approximated a correlation of r2 = 0.866 for a sample size of 10 and a coverage of 4×. Hence, we acknowledge that our sampling design prevents exact estimates of allele frequencies, but it should still permit the observation of general trends that are consistent across many loci widely distributed across the genome.
4.5 | The relevance of genomic long-term data for biodiversity conservation
Natural populations need genomic diversity to maintain the evolutionary potential that enables a rapid evolutionary response to changing environments (Hoffmann et al., 2017). The erosion of a substantial amount of genomic variation within a few generations, in combination with the loss of potentially adaptive genomic differentiation, both of which evolved over thousands of years, demonstrates the sensitivity of evolutionarily young adaptive radiations to environmental disturbance. Genetic erosion might have reduced the potential for resilience to future environmental change through a reduced evolutionary potential, increasing the extinction risk of each species. Therefore, characterizing the genomic change of natural populations across periods of ecological disturbance is fundamental to enhance species conservation and advance our understanding of biodiversity dynamics.
Understanding the genomic consequences of environmental change and its temporal dynamics heavily relies on a suitable baseline from before the onset of environmental disturbance (Jensen & Leigh, 2022). Increasing levels of ecosystem degradation on a global scale can result in a lowered subjective threshold for acceptable environmental conditions. Without any historical records of the original condition of a given environment, new generations might consider the situation in which they were raised as the appropriate baseline level (Soga & Gaston, 2018), a phenomenon termed the 'shifting baselines syndrome' (Pauly, 1995). Natural history collections, such as the one used here, can provide suitable baselines unaffected by anthropogenic influences and are therefore fundamental to counteract the shifting baselines syndrome. However, although such long-term data contribute disproportionately to biodiversity conservation and policy, the investment in generating them is declining (Hughes et al., 2017). Thus, the generation of genomic data from historical samples representing an appropriate baseline can fundamentally improve our understanding of the evolutionary response of natural populations to anthropogenic disturbance and thereby advance the establishment of targeted and efficient conservation measures.
FIGURE 1 Total phosphate, nucleotide diversity, Tajima's D and relatedness over time. Total phosphate concentration over time (a), Watterson's theta (b), Tajima's D (c), mean relatedness between individuals of each species and timepoint (d), and PCA based on genotype likelihoods plotted separately for each timepoint (e). Panel (e) shows the population structure of the Lake Constance whitefish radiation over time; these plots show the PCA from Figure S1, subsetted to the three sampling timepoints. Colours correspond to species (grey C. gutturosus, green C. macrophthalmus, blue C. wartmanni and orange C. arenicolus); see legend in panel (c).
FIGURE 2 Genomic differentiation between co-occurring Alpine whitefish species before eutrophication of Lake Constance. (a) Genomic differentiation between C. wartmanni and all other species of the radiation. Differentiation (FST value) at the timepoint before eutrophication is plotted for each site across chromosomes along the genome. The 50 most characteristic sites are highlighted in red. (b) Genomic differentiation between C. macrophthalmus and all other species of the radiation. (c) Genomic differentiation between C. arenicolus and all other species of the radiation. (d) Genomic differentiation between the extinct C. gutturosus and all other species of the radiation.
FIGURE 3 Allele frequency trajectories at the most characteristic sites over time. (a) The trajectories of the 50 most characteristic sites of C. wartmanni (the top 50 highest FST values when comparing C. wartmanni with all other species of the radiation at the timepoint before eutrophication). The allele frequencies of each position at the different sampling timepoints are connected with a line. The colour corresponds to species (blue C. wartmanni, green C. macrophthalmus, orange C. arenicolus). (b) The trajectories of the 50 most characteristic sites of C. macrophthalmus. (c) The trajectories of the 50 most characteristic sites of C. arenicolus. (d) The trajectories of the 50 most characteristic sites of the extinct C. gutturosus.
While Vonlanthen et al. (2012) describe an increase in the allelic richness of nine microsatellite markers over the period of eutrophication, using samples from the same historical scale collection as used for this study, we here report substantial losses of genetic diversity based on whole-genome resequencing data. This difference might be explained by the different data types. Microsatellites are multiallelic markers with a high mutation rate, resulting in fixed differences and private alleles between species or populations within a relatively short time span. In evolutionarily young adaptive radiations, however, such as the Alpine whitefish radiation, species differentiation is mainly based on frequency shifts at thousands of SNPs. As a result of this difference between microsatellite and SNP markers, demographic processes (such as population declines) or evolutionary processes (such as hybridization) can have different outcomes for diversity estimates based on the two marker types. Hybridization between two species whose differentiation is based on moderate frequency shifts at many SNPs might have little impact on their nucleotide diversity, while allelic richness at microsatellites is greatly increased because new alleles are brought into the species that were previously private to the other species. If such a hybridization event takes place during a period of weak population decline, nucleotide diversity at SNPs might decrease (from the demographic decline, while hybridization has little effect), whilst allelic richness at microsatellite loci increases (because hybridization brings in more new alleles than are lost to drift during the population decline). The contrasting results between microsatellite loci, still often used in the context of conservation and management of natural populations, and whole-genome resequencing data highlight the importance of marker choice for drawing valid and robust conclusions for the question of interest. Considering the unprecedented contemporary rates of habitat loss and species extinctions, mitigating the consequences of genome-wide losses of genetic variation is central to overcoming the current biodiversity crisis (Kardos et al., 2021) and is only possible by making use of genomic data for conservation purposes (Supple & Shapiro, 2018).
FIGURE 4 Allele frequency trajectories for two potentially functionally important loci. (a) Allele frequencies of the SNP found to be significantly associated with gill-raker count by De-Kayne et al. (2022). The legend shows the symbol and colour of each species, as well as the gill-raker count of each species (very low, low, high, very high) according to De-Kayne et al. (2022) and Vonlanthen et al. (2012). (b) Allele frequencies of the one SNP within the two vgll3 paralogs in the whitefish genome at which at least one of the pre-eutrophication populations has a minor allele frequency above 0.15 and which hence might reflect relevant genetic variation; vgll3 has been found to control age at maturity in Atlantic salmon (Barson et al., 2015).
2023-11-27T16:12:25.559Z
2023-11-23T00:00:00.000
{ "year": 2023, "sha1": "44f6efe26a1ca9d7c350a466870be65e22539529", "oa_license": "CCBY", "oa_url": "https://onlinelibrary.wiley.com/doi/pdfdirect/10.1111/eva.13617", "oa_status": "GOLD", "pdf_src": "PubMedCentral", "pdf_hash": "5e7f620213d99e186cacfecbb85d78dbacaae100", "s2fieldsofstudy": [ "Environmental Science", "Biology" ], "extfieldsofstudy": [] }
247971054
pes2o/s2orc
v3-fos-license
Quantitative MRI T2 relaxometry of knee joint in early detection of osteoarthritis Background: Magnetic resonance imaging (MRI) T2 relaxometry is an advanced modality for the early diagnosis of osteoarthritis. This study was performed to determine the MRI T2 relaxometry value of the knee joint in the early detection of osteoarthritis among suspected cases. Patients and methods: This observational study was conducted at the Department of Radiology, Jinnah Postgraduate Medical Centre (JPMC), Karachi, Pakistan, from 20th September 2020 to 28th February 2021. All patients aged 20-60 years of either gender suspected of knee osteoarthritis were consecutively enrolled. Osteoarthritis was confirmed based on a Kellgren & Lawrence (KL) radiographic grading of 2-5. MRI T2 relaxometry was performed in all patients. Results: Of 102 patients, there were 67 (65.7%) males and 35 (34.3%) females. The mean age was 43.72 ±14.01 years. KL grading showed grade 0 in 29 (28.4%), grade I in 13 (12.7%), grade II in 25 (24.5%), grade III in 30 (29.4%), and grade IV in 5 (4.9%) patients. The frequency of osteoarthritis was 60 (58.8%) patients. The mean MRI T2 value was 94.12 ±16.32. The mean MRI T2 value was significantly higher in patients with KL grade IV (109.89 ±5.38), followed by KL grade III (107.35 ±3.24), KL grade II (97.72 ±14.65), KL grade I (89.54 ±13.69), and KL grade 0 (76.65 ±10.56) (p-value < 0.001). The ROC curve analysis showed an AUC of 0.911 (0.85-0.97) (p-value < 0.001). Conclusion: MRI T2 relaxometry is highly recommended for the prediction of osteoarthritis in suspected cases. INTRODUCTION Diagnosis of knee disorders has dramatically improved over the last decade through better imaging techniques.1 Plain radiography, computed tomography (CT), magnetic resonance imaging (MRI), arthrography and state-of-the-art techniques such as T2 relaxometry have all contributed to this improved knee imaging.1,2 Knee X-rays are usually the initial imaging studies, especially when bone wear and tear is clinically suspected.3 The knee joint is the body's largest synovial joint, with many articulating surfaces. Its main constituent is the articulation between the femur and tibia, which carries the entire body weight and maintains balance. This articular surface is highly prone to failure from a variety of causes.4,5 MRI provides better morphological and compositional imaging of the articular cartilage in the knee using multi-planar capabilities, high spatial resolution without ionizing radiation, and superior tissue contrast.6 In routine knee joint clinical evaluations, most departments tend to apply a 2D fast SE multiplanar sequence, alone or in conjunction with a 3D GRE sequence, to enhance cartilage evaluation. The combined application of high-resolution morphological imaging techniques and compositional imaging techniques increases the sensitivity of MR imaging for the early detection of cartilage damage and its usefulness in cartilage repair assessments.7,8 This study aimed to determine whether MRI T2 relaxometry can accurately identify osteoarthritis early among suspected cases. The study will facilitate treating physicians and patients in achieving a better diagnosis and improving disease management in the future.
PATIENTS AND METHODS This descriptive cross-sectional study was conducted at the Department of Radiology, Jinnah Postgraduate Medical Centre (JPMC), Karachi, Pakistan, from 20th September 2020 to 28th February 2021. Ethical approval was obtained from the JPMC Committee, and signed informed consent was obtained from all study participants prior to enrolment in the study. All patients aged 20-60 years of either gender suspected of knee osteoarthritis were consecutively enrolled. Patients presenting with deformity of the knee joint, total joint replacement, or subchondral or stress fractures of the knee were excluded. Suspected cases were enrolled through the Osteoarthritis Initiative (OAI) Incidence cohort criteria9,10 and included at-risk cases comprising: (i) knee symptoms in the past 12 months, (ii) overweight or obesity, (iii) history of knee injury causing difficulty walking for at least a week, (iv) history of knee surgery, and (v) family history of osteoarthritis. Osteoarthritis was confirmed on the basis of a Kellgren & Lawrence (KL) radiographic grading of 2-5. The Epi Info sample size calculator was used to estimate the sample size, taking a confidence level of 95%, a margin of error of 9%, and a reported prevalence of osteoarthritis on MRI T2 relaxometry of 30%.8 The estimated sample size came out to be 102 suspected cases. A Canon MRI System® (1.5 T) whole-body magnetic resonance scanner was used, with a gradient strength of 40 mT/m and an eight-channel phased-array knee coil. No specific activity protocol was used prior to MRI. The MR examinations were scheduled so that there was no more than a 5-10 minute wait for the procedures. All patients were able to perform routine daily activities and walk to the MRI suite prior to imaging. The imaging protocol consisted of an axial T2-weighted, spin-echo multi-echo sequence performed with the following parameters: time of repetition (TR): 1200 msec; time of echo (TE): 18 msec and 36 msec; field of view (FOV): 150 mm x 150 mm; matrix size: 256 x 256; pixel size: 0.58 mm; number of acquisitions (NAQ) per TR: 1; number of slices: 10; slice thickness: 3 mm; interslice gap: 0.6 mm; and total acquisition time: 16 mins. Post-processing was performed on an Olea Sphere workstation using the relaxometry tool, with regions of interest in the patellar cartilage depending on the available cartilage tissue volume. Bony structures were well delineated and ruled out (Figures 1 and 2). SPSS version 24 was used for data analysis. Mean and standard deviation were calculated for quantitative variables such as age, height, weight, BMI, and the quantitative T2 relaxometry value. Frequencies and percentages were calculated for gender, native knee symptoms in the past 12 months, overweight or obesity, history of knee injury, family history of osteoarthritis, lifestyle factors such as occupational risk, and KL grading. Inferential statistics were explored using the chi-square test and one-way ANOVA. A p-value < 0.05 was considered significant. Moreover, a receiver operating characteristic (ROC) curve was applied to determine the diagnostic performance of MRI T2 in the detection of osteoarthritis (Figure 3). The ROC curve analysis showed that the AUC was 0.911 (0.85-0.97) (p-value < 0.001) (Figure 4).
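The statistical workflow described above (one-way ANOVA of T2 values across KL grades, and a ROC curve for the detection of osteoarthritis defined as KL grade ≥ 2) can be sketched as follows. The T2 values below are simulated, with group means loosely based on the figures reported in this study, so this is a schematic of the analysis rather than a reproduction of it; the actual analysis was run in SPSS.

```python
import numpy as np
from scipy.stats import f_oneway
from sklearn.metrics import roc_auc_score, roc_curve

rng = np.random.default_rng(0)

# Simulated mean T2 (ms) per Kellgren & Lawrence grade (hypothetical values
# loosely following the reported group means); counts mirror the KL distribution.
kl_means = {0: 76.6, 1: 89.5, 2: 97.7, 3: 107.4, 4: 109.9}
kl_counts = {0: 29, 1: 13, 2: 25, 3: 30, 4: 5}

grades, t2 = [], []
for grade, n in kl_counts.items():
    grades.extend([grade] * n)
    t2.extend(rng.normal(loc=kl_means[grade], scale=12.0, size=n))
grades, t2 = np.array(grades), np.array(t2)

# One-way ANOVA: do mean T2 values differ across KL grades?
groups = [t2[grades == g] for g in kl_counts]
f_stat, p_val = f_oneway(*groups)
print(f"ANOVA: F = {f_stat:.2f}, p = {p_val:.3g}")

# ROC analysis: osteoarthritis defined as KL grade >= 2, T2 as the predictor.
oa = (grades >= 2).astype(int)
auc = roc_auc_score(oa, t2)
fpr, tpr, thresholds = roc_curve(oa, t2)
print(f"ROC AUC = {auc:.3f}")
```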
DISCUSSION This study was conducted with the aim of determining the findings of quantitative MRI T2 relaxometry of the knee joint in the early detection of osteoarthritis. For this purpose, all patients with knee symptoms in the past 12 months, overweight or obesity, a history of knee injury causing difficulty walking for at least a week, a history of knee surgery, or a family history of osteoarthritis were enrolled, while osteoarthritis was confirmed on the basis of a KL radiographic grading of 2-5. The findings of the current study demonstrate that KL grade 0 was observed in 28.4%, grade I in 12.7%, grade II in 24.5%, grade III in 29.4%, and grade IV in 4.9% of patients. In particular, the findings of the current study showed that a KL grade of ≥2 (osteoarthritis) was observed in 58.8% of patients. Somewhat similar findings were observed in a previous study, in which a KL grading of ≥2 was observed in 44.2% of the patients.11 However, a considerably lower proportion of patients with KL grade ≥2 was reported in a study by Joseph and coauthors.12 A significantly higher frequency of osteoarthritis was found among patients older than forty-five years as compared to patients aged forty-five years or younger in this study. Similarly, the frequency of osteoarthritis differed significantly between males and females (47.8% vs. 80.0%, respectively), and a significantly higher frequency of osteoarthritis was found among patients with lifestyle risk factors such as occupational risk as compared to patients without such risk factors (82.1% vs. 50.0%). According to the current study findings, the mean MRI T2 value was approximately ninety-four. The mean MRI T2 value was significantly higher in patients with KL grade IV, followed by KL grade III, KL grade II, KL grade I, and KL grade 0. Pedoia and coworkers reported a higher frequency of osteoarthritis in males than in females.11 However, other variables such as age, BMI, and KL grading were found to be non-significant. Liebi and colleagues conducted a study on the predictive ability of MRI T2 in the early diagnosis of knee osteoarthritis.13 Their findings revealed that baseline T2 values in all compartments except the medial tibia were significantly higher in knees that developed OA compared with controls and were particularly elevated in the superficial cartilage layers in all compartments. There was an increased likelihood of incident knee osteoarthritis associated with higher baseline T2 values, particularly in the patella, medial femur, lateral femur, and lateral tibia. Further studies also reported that higher MRI T2 values have high efficacy in predicting the degenerative changes of osteoarthritis.14-16 A previous report observed that T2 measurements are sensitive to the earliest changes in biochemical cartilage composition that are precursors to the development of radiographic disease and, through early diagnosis, may play a role in efforts to support a paradigm shift from palliation of late osteoarthritis towards prevention of disease.13 The findings of the current study should be interpreted in light of a number of limitations. Firstly, it was carried out in a single centre with a limited sample size. As this study was carried out during the coronavirus disease (COVID-19) pandemic, the inclusion of patients was very challenging. Secondly, this study included all clinically suspected knee osteoarthritis patients, whereas the majority of earlier studies were conducted analytically or longitudinally to assess the degenerative changes in knee osteoarthritis.17,18
Lastly, a previous study reported findings on the basis of different anatomical locations;8 however, the current study did not provide findings for the various anatomical locations. Despite these limitations, this study is of great importance because, to our knowledge, it is the first study of its kind from Pakistan to report how osteoarthritis among suspected cases can be accurately identified by MRI T2 relaxometry. MRI T2 measures early degenerative changes in knee cartilage that occur prior to macroscopic cartilage defects and thinning.19 Thus, a composite model consisting of clinical risk factors and imaging data may help identify subjects at high risk of osteoarthritis. Further large-scale longitudinal studies are needed to assess MRI T2 relaxometry at multiple time points to monitor the sequence of pathological events in cartilage and other tissues leading to the onset of osteoarthritis. Such studies are highly recommended in our part of the world because, due to social and financial constraints, most people present late. Early diagnosis will eventually help improve patient management and prognosis. CONCLUSION MRI T2 relaxometry may be recommended for the prediction of osteoarthritis in suspected cases. In addition, this modality could be used for the early diagnosis of knee osteoarthritis in high-risk patients, such as those of older age, those who are overweight, and/or those with lifestyle risk factors.
2022-04-06T15:27:02.167Z
2022-04-04T00:00:00.000
{ "year": 2022, "sha1": "1951f874a484285effcf2e18912b7eae0d2a2ee2", "oa_license": "CCBYNC", "oa_url": "https://www.jfjmu.com/index.php/ojs/article/download/840/596", "oa_status": "HYBRID", "pdf_src": "Anansi", "pdf_hash": "0f43c573a1bc277f6bf863af559b77aa7663f398", "s2fieldsofstudy": [ "Medicine" ], "extfieldsofstudy": [] }
11288607
pes2o/s2orc
v3-fos-license
Oxidative Stress Activates the Transcription Factors FoxO 1a and FoxO 3a in the Hippocampus of Rats Exposed to Low Doses of Ozone The exposure to low doses of ozone induces an oxidative stress state, which is involved in neurodegenerative diseases. The forkhead box O (FoxO) family of transcription factors is activated by oxidative signals and regulates cell proliferation and resistance to oxidative stress. Our aim was to study the effect of chronic exposure to ozone on the activation of FoxO 1a and FoxO 3a in the hippocampus of rats. Male Wistar rats were divided into six groups and exposed to 0.25 ppm of ozone for 0, 7, 15, 30, 60, and 90 days. After treatment, the groups were processed for western blotting and immunohistochemistry against FoxO 3a, Mn SOD, cyclin D2, FoxO 1a, and active caspase 3. We found that exposure to ozone increased the activation of FoxO 3a at 30 and 60 days and the expression of Mn SOD at all treatment times. Additionally, increases in cyclin D2 from 7 to 90 days, FoxO 1a at 15, 30, and 60 days, and active caspase 3 from 30 to 60 days of exposure were noted. The results indicate that ozone alters regulatory pathways related to both the antioxidant system and the cell cycle, inducing neuronal reentry into the cell cycle and apoptotic death. Introduction Neurodegenerative diseases, such as Alzheimer's, Parkinson's, and Creutzfeldt-Jakob disease, are a set of disorders that affect specific groups of neurons, inducing chronic, progressive, and irreversible dysfunction [1]. These pathologies share common features, namely abnormal protein aggregation, excitotoxicity, neuroinflammation, and oxidative stress [2]. Several conditions have been associated with redox imbalance, and aging is one of the most important risk factors for the development of neurodegenerative diseases [3]. Considerable evidence suggests that the central nervous system (CNS) is highly sensitive to oxidative stress because of its high content of unsaturated phospholipids, its high metabolic rate, and its low content of some antioxidant enzymes, such as catalase [4,5]; the hippocampus, substantia nigra, and striatum are the most sensitive structures [6]. Additionally, exposure to air pollution may contribute to the development of neurodegenerative diseases [7]. Increasing exposure to environmental contaminants has attracted the attention of researchers because it plays an important role in the risk factors associated with mortality and accounts for 2.5% of all deaths in developing countries [8]. The effect of exposure to air pollution has been extensively studied in the respiratory and cardiovascular systems; however, its impact on the central nervous system (CNS) is poorly understood. The forkhead box (Fox) family of transcription factors regulates a myriad of cellular functions. These transcription factors are involved in the regulation of metabolism, cell proliferation, resistance to stress, immune system regulation, and apoptosis [9]. Activation of these transcription factors can be regulated by growth factors such as insulin-like growth factor (IGF), which promotes FoxO phosphorylation at its C-terminal end via protein kinase B (AKT/PKB), resulting in nuclear exclusion and a subsequent loss of transcriptional activity [10]. Reactive oxygen species (ROS) may also induce the activation of these transcription factors through posttranslational modifications such as phosphorylation via the JNK pathway [11], methylation, and ubiquitination [12].
Figure 1: Effect of ozone exposure on pFoxO 3a in the rat hippocampus.
(a) The micrographs show cells that are positive for phosphorylated FoxO 3a (pFoxO 3a) (green) in the dentate gyrus (DG) of rats exposed to ozone-free air (control) or to ozone for 7, 15, 30, 60, and 90 days. The images show a progressive increase in immunoreactivity against phosphorylated FoxO 3a from 15 days to 60 days of exposure to ozone. pFoxO 3a shows nuclear localization from 15 days to 60 days of ozone exposure, and at 90 days it is located in the cytoplasm. Insets show magnified views of selected areas of the same image. (b) A representative western blot shows the content of total FoxO 3a and pFoxO 3a in homogenized hippocampi of rats exposed to ozone for different durations. The normalized graph shows densitometry values presented as the mean ± SD (*p < 0.05) (n = 6).
Therefore, the FoxO family is considered to play a central role in redox signaling [13]. These transcription factors have a highly conserved DNA binding domain (approximately 100 amino acids) and are tissue specific; thus, the activation pathway determines the functional effects of FoxOs in a particular cell [14]. FoxO 1a participates in the regulation of gluconeogenesis and glycogenolysis by controlling the expression of glucose-6-phosphatase and phosphoenolpyruvate carboxykinase, and it also participates in cell cycle regulation by downregulating the transcription of cyclin D [15,16]. In addition, FoxO 1a, together with FoxO 3a, is involved in resistance to oxidative stress via the upregulation of antioxidant enzymes such as catalase and Mn SOD [4,17]; FoxO 3a is also linked to apoptotic processes by modulating the expression of proapoptotic (e.g., Bim and PUMA) and antiapoptotic (FLIP) proteins [10]. Our group has developed a murine model of neurodegeneration based on chronic exposure to low doses of ozone, a major component of air pollution. With this model, it was shown that chronic exposure to ozone generates a state of oxidative stress that induces damage in rat brains [14]. Previously, we reported an increase in oxidized lipids and proteins, in addition to a loss of brain repair processes, in the hippocampi and substantia nigra of the animals throughout the treatment [6,18]. The aim of this work was to study the effect of chronic exposure to ozone on cell signaling related to oxidative stress. Exposure to Ozone. A total of 72 male Wistar rats (250-300 g) were maintained individually in acrylic boxes with free access to water and food. The animals were randomly divided into six groups: (1) control (exposure to ozone-free air); (2) exposure to ozone for 7 days; (3) exposure to ozone for 15 days; (4) exposure to ozone for 30 days; (5) exposure to ozone for 60 days; and (6) exposure to ozone for 90 days (n = 12 per group). No differences between the control groups were detected at any time during ambient air exposure; therefore, we chose the 30-day air-exposure group as the control. Exposure to ozone was performed according to the method previously described by Rivas-Arancibia et al. [18]. Briefly, rats were placed in a transparent acrylic chamber that was then sealed, and the rats were exposed to ozone (0.25 ppm) for 4 hours daily over the period of time indicated for each treatment (7, 15, 30, 60, or 90 days). The ozone concentration in the chamber was monitored throughout the experiment with an ozone monitor (PCI Ozone and Control Systems, West Caldwell, NJ). After the exposure, the rats were placed in individual boxes.
Control rats were subjected to the same protocol with exposure to ambient airflow. All experiments were conducted according to the guidelines of the National Institutes of Health and the Mexican Official Standard (NOM-062-ZOO 1999). Western Blot. Two hours after the end of treatment, the animals were deeply anesthetized with pentobarbital sodium (50 mg/kg) and sacrificed. The hippocampi were dissected, homogenates were prepared in buffer, and protein was quantified by the Bradford method. The samples were mixed with 5X loading buffer (0.5 M Tris [pH 8.5], 10% SDS, 30% glycerol, 0.1% bromophenol blue, and 100 mM dithiothreitol) and boiled for 10 minutes. Subsequently, the proteins were separated by 10% denaturing polyacrylamide gel electrophoresis and transferred to a polyvinylidene fluoride membrane (PVDF; Immobilon-P Transfer Membranes, Millipore Corporation, Billerica, MA, USA). The membranes were blocked for 2 hours at room temperature with 5% skim milk in Tris phosphate buffer/Tween 20 (0.1%). After washing, the membranes were incubated overnight with primary antibodies (anti-rabbit FoxO 1a, anti-rabbit FoxO 3a, anti-rabbit GAPDH, anti-mouse cyclin D2, and anti-rabbit Mn SOD) at 1:1000 dilutions. Subsequently, the membranes were washed and incubated for 2 hours with the corresponding HRP-conjugated secondary antibody (anti-rabbit IgG or anti-mouse IgG) diluted 1:10,000. The chemiluminescence signal was detected with Immobilon Western Chemiluminescent HRP Substrate (Millipore Corporation, Billerica, MA, USA). Immunohistochemistry. Two hours after the last exposure to ozone, the rats (n = 6 per group) were deeply anesthetized and perfused with 4% paraformaldehyde. The brains were dissected and placed in the same fixative solution for 24 hours at 4 °C. Subsequently, conventional histological techniques were performed to obtain tissue embedded in paraffin. Sagittal sections (5 μm thickness) were cut, and those that contained the hippocampus were used for immunohistochemistry. The slices were deparaffinized with xylene and rehydrated; subsequently, the slices were washed with phosphate-buffered saline (PBS; 50 mM sodium phosphate, 0.15 M sodium chloride, pH 7.4) and incubated with 2% fatty acid-free bovine serum albumin (fraction V; MP Biomedical, LLC., USA) for 30 minutes to prevent nonspecific binding. Subsequently, the samples were permeated with 0.2% Triton X100 in PBS for 10 minutes before overnight incubation with the primary antibody at 4 °C. Lastly, the preparations were incubated with the corresponding FITC-conjugated secondary antibody. Additionally, the slices were counterstained with Vectashield with DAPI (Vector Labs, CA, USA) for nuclear staining. Confocal micrographs were obtained on a Leica microscope (Leica TCS-SP5).
[Figure caption fragment] ... in the dentate gyrus (DG) of rat hippocampi exposed to ozone for different periods of time (7, 15, 30, 60, and 90 days) or exposed to ozone-free air (control). An increase from 7 days to 90 days of ozone exposure was observed; this protein shows nuclear localization from 30 days to 90 days of ozone exposure. Insets show magnified views of selected areas of the same image. (b) A representative western blot illustrating changes throughout the treatment; the graph shows densitometry values presented as the mean ± SD (*p < 0.05) (n = 6).
Statistical Analysis. The data are presented as the mean ± SD and were analyzed using one-way ANOVA followed by Dunnett's post hoc test to evaluate statistical significance. A p-value < 0.05 was considered the threshold for statistical significance.
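The group comparison described above (one-way ANOVA followed by Dunnett's post hoc test against the air-exposed control) can be sketched in a few lines. The densitometry values below are invented for illustration only, and the Dunnett test shown requires SciPy 1.11 or newer; the original analysis was presumably performed in a dedicated statistics package.

```python
import numpy as np
from scipy.stats import f_oneway, dunnett

rng = np.random.default_rng(42)

# Hypothetical normalized densitometry values (target protein / GAPDH), n = 6 per group.
control = rng.normal(1.00, 0.10, size=6)                 # ozone-free air
ozone_means = {7: 1.10, 15: 1.25, 30: 1.60, 60: 1.55, 90: 1.20}
treated = {d: rng.normal(m, 0.12, size=6) for d, m in ozone_means.items()}

# One-way ANOVA across all six groups.
f_stat, p_anova = f_oneway(control, *treated.values())
print(f"ANOVA: F = {f_stat:.2f}, p = {p_anova:.4f}")

# Dunnett's test: each ozone-exposure group versus the control group.
res = dunnett(*treated.values(), control=control)
for (day, _), p in zip(treated.items(), res.pvalue):
    flag = "*" if p < 0.05 else ""
    print(f"{day:>2} days vs control: p = {p:.4f} {flag}")
```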
Results
Activation of FoxO 3a. Activation was evaluated by identifying the phosphorylated form of FoxO 3a (pFoxO 3a). We detected a gradual increase in immunoreactivity in the dentate gyrus (DG) of the rat hippocampus. This increase was evident from the 15th to the 60th day of exposure to ozone (Figure 1(a)). The photomicrographs illustrate that this protein is mainly localized in the nuclei of the DG cells (Figure 1(a)). Moreover, western blot analysis showed a significant increase in pFoxO 3a protein after 30 and 60 days (*p < 0.05) of exposure to ozone relative to the control. There was also an increase with respect to the control at 90 days of exposure; however, this increase was not statistically significant.
Figure 4: Effect of ozone exposure on pFoxO 1a immunoreactivity in the rat hippocampus. (a) The photomicrographs show pFoxO 1a-positive cells (green) in the dentate gyrus (DG) region. Rats were exposed to ozone-free air (control) or to ozone for 7, 15, 30, 60, and 90 days. A progressive increase can be observed in the immunoreactivity against pFoxO 1a from 15 days to 60 days of ozone exposure. Insets show magnified views of selected areas of the same image. (b) A representative western blot shows the content of total FoxO 1a and pFoxO 1a protein in the hippocampi of rats exposed to ozone for different times. The graph shows densitometry values presented as the mean ± SD (*p < 0.05).
Mn SOD Protein Expression. Mn SOD protein expression was examined by western blot. This protein showed a gradual, statistically significant increase from as early as 15 days to as late as 60 days (*p < 0.05) of exposure to ozone compared with the control, and an increase at 30 and 60 days compared with 7 days (**p < 0.05) (Figure 2).
Analysis of Cyclin D2. As a result of ozone exposure, a gradual increase in cyclin D2-immunoreactive cells was observed; after 30 days of treatment, an increase in cyclin D2 nuclear translocation in DG cells of rat hippocampi was also observed (Figure 3(a)). Western blot analysis showed a statistically significant increase in cyclin D2 protein at 7, 30, 60, and 90 days (*p < 0.05) of ozone treatment compared with the control group (Figure 3(b)).
Activation of FoxO 1a. Cells that were immunoreactive to FoxO 1a were observed in the hippocampi of rats after all ozone treatments, and the number of immunoreactive cells showed a gradual increase with increasing ozone exposure time (Figure 4(a)). Western blot analysis showed significant increases in FoxO 1a protein at 15, 30, and 60 days (*p < 0.05) of exposure to ozone compared with the control (Figure 4(b)).
Activation of Caspase 3. Active caspase 3 showed an increase at 30 and 60 days of exposure and a decrease at 90 days (*p < 0.05) of ozone exposure with respect to total caspase 3 (Figure 5).
Discussion
Based on the present results, we have demonstrated increased activation of FoxO 3a via the identification of its phosphorylated form (pFoxO 3a; Figure 1). According to a previous report by Kops et al. [19], an increase in pFoxO 3a may be associated with an increased amount of Mn SOD, as shown in the present study (Figure 2). This finding suggests that FoxO 3a has a regulatory role in the response to the increasing oxidative stress generated by exposure to low doses of ozone. Previous experiments by our group have shown that the activity of this enzyme does not correlate with the increased expression demonstrated here; this may suggest a blockade of activity caused by oxidative damage to the enzyme structure [20].
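The western blot quantification underlying these comparisons is a band-densitometry normalization: each target band is divided by its GAPDH loading control and expressed relative to the air-exposed control. A minimal sketch of that normalization step is given below, with all band intensities invented for illustration.

```python
# Hypothetical raw band intensities from one membrane (arbitrary units).
raw = {
    "control":   {"pFoxO3a": 1200.0, "GAPDH": 9800.0},
    "ozone_30d": {"pFoxO3a": 2600.0, "GAPDH": 10100.0},
    "ozone_60d": {"pFoxO3a": 2450.0, "GAPDH": 9500.0},
}

# Step 1: normalize each target band to its loading control (GAPDH).
norm = {g: v["pFoxO3a"] / v["GAPDH"] for g, v in raw.items()}

# Step 2: express each group as fold change relative to the control group.
fold = {g: val / norm["control"] for g, val in norm.items()}
for group, fc in fold.items():
    print(f"{group}: {fc:.2f}-fold vs control")
```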
Numerous studies have demonstrated that neuronal damage, such as that which occurs in Alzheimer's disease (AD), increases the expression and activation of cell cycle regulatory proteins [21]. It has been shown that D-type cyclins play an important role in memory, learning, and neuronal plasticity processes [22]. These processes are linked to the formation of new neurons from neuroblasts in the DG of the hippocampus [23]. However, the expression of these proteins in mature neurons activates death pathways in the cell [24]. The damage generated by chronic exposure to low doses of ozone induced an increase in the expression of cyclin D2 in the hippocampi of the treated animals (Figure 3). The results show an increase in the expression of this protein beginning at 7 days and continuing throughout all treatments; however, cyclin D2 only translocates into the nucleus after 30 days of exposure to ozone. This translocation may be associated with the increase in active caspase 3 at 30 and 60 days of exposure to ozone (Figure 5), suggesting the activation of apoptotic pathways. The increase in cyclin D2 after short periods of ozone exposure (7, 15, and 30 days) could be associated with the neurogenesis process. We previously reported that, under the same conditions, the number of neuroblasts increases from 7 to 30 days of treatment; however, after 30 days of exposure to ozone, there is an increase in the death of neuroblasts and therefore a loss of neuronal repair processes [18]. Under normal conditions, mature neurons inhibit signals triggering cell cycle reentry through the phosphorylation of FoxO 1a (pFoxO 1a), because this transcription factor increases the expression of p27, which blocks the synthesis of cyclin D2 [16]. Thus, FoxO 1a functions as a tumor suppressor [25]. The present results show a significant increase in pFoxO 1a at 15, 30, and 60 days of exposure to ozone; however, this increase does not correlate with its repressive effect on cyclin D2. In addition to the increased cyclin D2 in the nucleus and the increased expression of active caspase 3, which were demonstrated in this work, we previously reported increased translocation of p53 to the nucleus, which is a signal that activates apoptotic death in mature neurons [18]. Many authors have reported that both the p53 and FoxO 1a transcription factors are involved in the same signaling pathways related to survival and cell death. These pathways are well regulated under redox balance; however, the loss of this balance alters them. After DNA damage, p53 could inhibit the action of FoxO 1a [26]. Thus, the loss of redox balance increases cell death, likely by activating many pathways that induce apoptosis when mature neurons suffer cell damage. Similar phenomena could occur in neurodegenerative diseases associated with a chronic loss of redox balance. In summary, the exposure to low doses of ozone induced an increase in the expression of FoxO 1a, FoxO 3a, Mn SOD, cyclin D2, and caspase 3 from 7 to 60 days of exposure; the highest increase was observed at day 30 of exposure, while at 60 and 90 days the response showed a tendency to decrease. This behavior could be the result of a loss of antioxidant system regulation and/or an increase in neuronal death by apoptosis. Activation of FoxO 3a is associated with an increase in the expression of Mn SOD; however, this increase is unrelated to an increase in Mn SOD activity.
On the other hand, ozone also increases the expression and nuclear translocation of cyclin D2, thereby activating the cell cycle, which triggers death signals in mature neurons. This process may be associated with the increased expression of active caspase 3. Under normal redox balance, the increased expression of cyclin D2 could be blocked indirectly by the activation of FoxO 1a; however, an imbalance in the redox state could modify this response (Figure 6).
Conclusions
We conclude that the oxidative stress generated by chronic exposure to low doses of ozone alters regulatory pathways that attempt to counteract the production of ROS via upregulation of Mn SOD; however, ozone exposure does not affect the activity of these enzymes. On the other hand, the loss of redox balance causes alterations in intracellular signaling pathways, increases oxidative damage, and consequently induces the aberrant cell cycle reentry of mature neurons, which ends in the induction of catastrophic apoptosis. The failure of these protection mechanisms, together with neurodegeneration processes, generates a vicious cycle of degeneration.
2016-05-12T22:15:10.714Z
2014-05-22T00:00:00.000
{ "year": 2014, "sha1": "76fc280106509ed66cdebeca3fded8a3604f48e0", "oa_license": "CCBY", "oa_url": "http://downloads.hindawi.com/journals/omcl/2014/805764.pdf", "oa_status": "GOLD", "pdf_src": "PubMedCentral", "pdf_hash": "6b84a26a8e932951f4b65fad664b0e34d1907e98", "s2fieldsofstudy": [ "Biology" ], "extfieldsofstudy": [ "Biology", "Medicine" ] }
248959978
pes2o/s2orc
v3-fos-license
New n-p Junction Floating Gate to Enhance the Operation Performance of a Semiconductor Memory Device To lower the charge leakage of a floating gate device and improve the operation performance of memory devices toward a smaller structure size and a higher component capability, two new types of floating gates composed of pn-type polysilicon or np-type polysilicon were developed in this study. Their microstructure and elemental compositions were investigated, and the sheet resistance, threshold voltages and erasing voltages were measured. The experimental results and charge simulation indicated that, by forming an n-p junction in the floating gate, the sheet resistance was increased and the charge leakage was reduced because of the formation of a carrier depletion zone at the junction interface, which serves as an intrinsic potential barrier. Additionally, the threshold voltage and erasing voltage of the np-type floating gate were elevated, suggesting that the performance of the floating gate in the operation of memory devices can be effectively improved without the application of new materials or changes to the physical structure. Introduction Memory devices, among the most typical and widely used electronic devices, generally comprise a plurality of gate structures, each including a control gate and a floating gate [1,2]. The floating gate is a conductive layer, normally fabricated from polysilicon, that is positioned between the control gate and a silicon substrate [1,2]. The floating gate is not attached to any electrodes or power sources and is generally surrounded by an insulating material [1,2]. The operation of the memory cells depends upon the charge stored in the floating gate, which sets the threshold voltage used to represent information in these memory devices [3,4]. The performance of the memory cells is determined by the programming speed, which is dominated by the speed of the erasing and writing operations [1,2]. This speed is basically limited by the rate at which electrons can be pumped into (writing) and out of (erasing) the devices without causing damage to the device [1,5-7]. Typically, writing and erasing operations must be capable of operating within 1 ms at a specified applied voltage [1,6,8-11]. Aiming at a higher capability but a smaller chip size, the semiconductor industry has been increasingly driven towards smaller and more numerous electronic devices, including memory cells [2,12,13]. To reduce the size and accordingly increase the number of such devices, while simultaneously maintaining or even improving their respective capabilities, the size of components and the distance between components need to be reduced [2,14,15]. However, as the cell size is reduced, other issues arise that prevent a further reduction in size [15,16]. One of these issues is that charge leakage from the floating gate may increase, thereby deteriorating the performance of the devices, as the individual layers of the gate structures are made smaller and placed closer to each other [15]. In particular, the tunneling oxide will be more seriously damaged with repeated programming and erasing cycles, resulting in more charge leakage [15]. In order to overcome the issue of charge leakage, many device structures have been proposed, e.g., SONOS, BE-SONOS, TAHOS and 3D FLASH [6,17-20]. The 3D NAND FLASH structure was proposed as a solution when 2D NAND FLASH reached the scaling limit of a 15 nm process node [21].
Furthermore, the ReRAM [8,22], PCRAM [23,24], FeRAM [25,26] and MRAM [27,28] devices have also attracted much attention in the past two decades as promising candidates for the next generation of nonvolatile memory cells with improved performance. However, new semiconductor devices with markedly shrunken gate structures and reduced charge leakage that do not sacrifice their performance or suffer from environmental contamination are still elusive. Hence, in this study, two new floating gate structures, including a p-n junction and an n-p junction, were designed, investigated and processed on 300 mm wafers. In these new designs, no extra new material needs to be employed, no new process needs to be developed and no contamination risk needs to be considered when the devices are processed at the semiconductor manufacturing factory. By forming an n-p junction instead of a p-n junction in the first conductive layer (the floating gate), the charge leakage across the second dielectric layer (the inter-polysilicon dielectric layer) may be reduced. This n-p junction interface is anticipated to provide an intrinsic potential barrier to inhibit the leakage path, successfully reducing the charge leakage and enlarging the programming and erasing window. Additionally, upon the reduction of the charge leakage across the second dielectric layer, the second dielectric layer can be made thinner and/or even be completely removed from wrapping the first conductive layer. The gate structure can thereby be made more compact, allowing a smaller semiconductor device without sacrificing the performance of the device. Device Fabrication NAND FLASH memory devices with two new floating gate structures were fabricated on p-type 300 mm silicon (Si) wafers with n+ junctions. As shown in Figure 1, the memory devices comprise the Si substrate, the first dielectric layer (tunneling oxide, denoted as TUN OX) disposed along the substrate, and the first conductive layer (floating gate, FG) disposed along the first dielectric layer (Figure 1a,c, schematically illustrated from the X- and Y-direction cross-sections, respectively). The second dielectric layer (inter-polysilicon dielectric, IPD) is disposed along the sidewall of the first conductive layer, and the second conductive layer (control gate, CG, such as n-type polysilicon) is afterwards deposited. Two new types of the first conductive layer, i.e., the floating gate, were proposed, including the pn-type (a bottom "p+" region followed by a top "n+" region) polysilicon and the np-type (bottom "n+" followed by top "p+") polysilicon, for which a high-temperature chemical vapor deposition (CVD) boron-doping polysilicon process and a high-temperature furnace phosphorous-doping polysilicon process were applied at 500 °C, in sequence or vice versa. The thickness ratio of the bottom-to-top regions of the pn-type or np-type polysilicon was designed to be around 1:3. For comparison, a conventional floating gate (the control split) was also prepared, with single n+ polysilicon as the first conductive layer. The concentration of dopants in the n-type and p-type polysilicon was around 1 × 10^19 cm^-3 and 1 × 10^21 cm^-3, respectively. Characterization and Measurement Thin foils (cross-sectional) of the memory devices around the floating gates were cut by using a focused ion beam system (USA, FIB, FEI Expida1265) and milled with an ultralow current, and the microstructure was observed by using a transmission electron microscope (Netherlands, TEM, FEI Osiris).
The depth profile of elemental compositions along the floating gates for understanding the distribution of dopants was determined by using a secondary ion mass spectrometer (France, SIMS, AMETEK ims-6f). The sheet resistance (Rs) of the floating gates, programming threshold voltage (Vth) and erasing voltage (Ver) were measured by using a WAT system (USA, Keysight, 4082F). The charge simulation of the floating gates was performed by TCAD (Technology Computer-Aided Design). Microstructure and Chemical Composition Figure 1b shows the cross-sectional TEM microstructure of the memory cells around the floating gates with np-type polysilicon from the X-directional view. Clearly, the tunneling oxide layer is disposed between the floating gates and the substrate, and the inter-polysilicon dielectric layer is uniformly deposited on the floating gates. The image contrast indicates two regions in the floating gates: the bright region at the top and the dark region at the bottom, and the thickness ratio of the bottom-to-top regions is roughly estimated to be around 1:3. As further illustrated in Figure 1d, the SIMS depth profile along the floating gate confirms four regions of elemental distribution along the floating gate, from top to bottom: (1) the top p+ polysilicon for a thickness of about 60 nm, with a silicon element; (2) the bottom n+ polysilicon for 20 nm, with silicon and a high concentration of phosphorous dopants; (3) the tunneling oxide and (4) the silicon substrate. It was noted that in region (1), boron dopants were not present due to the improper collection condition of light-ionized boron signals from the uneven film structure of the sample instead of a planar/blanket one. However, the gradually dropping intensity of silicon might reveal the existence of other elements that were very likely boron. Sheet Resistance and Charge Figure 2 presents the cumulative probability plot and box plot of sheet resistance for three different floating gates, including the control split and the new pn-type and np-type floating gates. Clearly, the sheet resistance of the new floating gates was higher than that of the control split (i.e., about 1.7 times for the pn-type floating gates and 2.1 times for the np-type floating gates), which was plausibly caused by the formation of a depletion zone and the narrowed channels for current flow.
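The roughly two-fold rise in sheet resistance can be put in perspective with a crude parallel-conductance bookkeeping exercise, under the depletion-zone picture explained in the following paragraph. This is a hedged illustration only: it treats the control split as a stand-in for an undepleted two-layer stack and ignores grain-boundary and mobility differences, neither of which the paper quantifies.

# Hedged back-of-the-envelope check: if the depletion zone at the junction
# switches off the bottom layer of the floating gate, the measured increase
# in sheet resistance implies how much of the total sheet conductance that
# layer carried before depletion (simple parallel-conductance model).
def depleted_layer_share(rs_ratio):
    """Fraction of sheet conductance carried by the depleted layer,
    assuming Rs_after / Rs_before = G_before / G_remaining."""
    return 1.0 - 1.0 / rs_ratio

for label, ratio in [("pn-type vs control", 1.7), ("np-type vs control", 2.1)]:
    print(f"{label}: Rs ratio {ratio:.1f} -> depleted path carried "
          f"~{100 * depleted_layer_share(ratio):.0f}% of the conductance")

Under that reading, the 2.1 times increase for the np-type gate would mean the depleted bottom path had carried roughly half of the sheet conductance before the junction formed.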
When a forward bias was applied to the np-type floating gate, or a reverse bias was applied to the pn-type floating gate, a depletion zone of carriers would be formed at the n-p or p-n junction interface [28][29][30][31][32], leading to an open circuit at the bottom region of the floating gates. Current flow was therefore allowed only through the top p+ or n+ polysilicon paths, respectively, and the narrowed channel would thus result in increased resistance, particularly for the np-type floating gates, as the mobility of holes in the p+ polysilicon path was lower than that of electrons in the n+ polysilicon path [29,30]. As illustrated in the band diagrams of the neutral and charged states of these three floating gates in Figure 3a,c, different band structures are expected. For the conventional n-type polysilicon floating gate (the control split, Figure 3a) at a programming voltage (positive bias, ΔV) applied to the control gate, the energy band near the control gate will bend downward for ΔV to form a channel near the tunneling oxide for carriers to tunnel through the tunneling oxide into the floating gate for programming [1,2]. The charge in the floating gate depends on the gate coupling ratio (GCR) to influence the efficiency of the device programming [13,33].
In comparison, for the pn-type polysilicon floating gate (Figure 3b) and the np-type polysilicon floating gate (Figure 3c) at a thermal equilibrium state, the Fermi level (Ef) is close to the valence band in the p+ region (conduction by holes) and close to the conduction band in the n+ region (conduction by electrons). At a constant Fermi level, the distributions of carriers as well as the energy levels of the conduction band (Ec) and valence band (Ev) are thus different in the p+ and n+ regions at the neutral state, and a depletion zone (a thin region with very few carriers) of high electrical resistance will accordingly be formed at the p-n or n-p junction interface [29][30][31][32]. When a programming voltage ΔV is applied to the control gate, the energy band bends downward, and the carriers will tunnel through the tunneling oxide into either the pn- or the np-type floating gate in the same way as the conventional floating gate. However, owing to the different space charge distributions in the p+ and n+ regions, electrons stay mainly in the n+ region [29,34]. The carriers (electrons) into the pn-type floating gate will induce a reverse bias in the p-n junction to cause the expansion of the depletion zone and the shrinkage of the top n+ region for carrier storage, therefore reducing the total stored charge. On the contrary, in the np-type floating gate case, a forward bias will lead to the contraction of the depletion zone and the extension of the bottom n+ region for carrier storage, which in turn increases the total stored charge. In addition, because the bottom n+ region is close to the tunneling oxide channel and has a low energy barrier for programming, and the top p+ region is adjacent to the inter-polysilicon dielectric and has a high energy barrier, the charge leakage of the control gate is expected to be inhibited, which aids in improving the programming efficiency and elevating the programming threshold voltage (Vth) of the np-type floating gate as investigated below.
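For a rough sense of scale of this intrinsic barrier, the textbook expressions for the built-in potential and zero-bias depletion width can be evaluated with the nominal doping levels quoted in the fabrication description (about 1 × 10^19 cm^-3 donors and 1 × 10^21 cm^-3 acceptors). This is only an order-of-magnitude sketch, since at such levels the polysilicon is degenerate and grain boundaries are ignored:

\[
V_{bi} \approx \frac{kT}{q}\,\ln\!\frac{N_A N_D}{n_i^{2}}
= 0.0259\,\text{V}\times\ln\!\frac{(10^{21})(10^{19})}{(1.5\times10^{10})^{2}}
\approx 1.2\,\text{V},
\qquad
W \approx \sqrt{\frac{2\,\varepsilon_{\mathrm{Si}}\,V_{bi}}{q}\left(\frac{1}{N_A}+\frac{1}{N_D}\right)}
\approx 12\,\text{nm}.
\]

A barrier of the order of the silicon bandgap and a depletion width falling mostly on the lightly doped n+ side, comparable to the 20 nm bottom region, are at least consistent with the picture of a junction barrier occupying a substantial part of the layer next to the tunneling oxide.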
Furthermore, the charge simulations given in Figure 3d-f confirm the aforementioned assumption regarding charging in the three different floating gates. When the voltage applied to the control gate (Vg) is switched from 20 V (the programming state) to 0 V (the retention state), as expected, there is no change in the amount or distribution of charge in the conventional floating gate (the control split, Figure 3d), since the n-type floating gate is simply composed of a single material (n+ polysilicon). However, the charge is obviously redistributed, and a part of the charge is lost in the pn-type and np-type floating gates when the gate voltage Vg is switched. Clearly, at Vg = 20 V, the charge in the n+ or p+ region of the np-type floating gate is larger than that of the pn-type floating gate. At Vg = 0 V, in addition to the fact that more charge in the n+ region of the np-type floating gate is retained, a portion of charge in the p+ region is retained as well, suggesting that this n-p junction design in the floating gate will benefit the retention of charge, particularly because the p+ region is much farther away from the tunneling oxide, making it less likely that a charge leakage will occur. Threshold Voltage and Erasing Voltage Two other important factors dominating the programming (writing) window and performance of memory devices include the threshold voltage (Vth, the gate voltage required to create strong inversion under the gate when the floating gate contains the electrons [35]) and the erasing voltage (Ver, the voltage required for removing the stored charge (electrons) in the floating gates [36]). When the gate voltage is below the threshold voltage, this device is no longer in strong inversion. This region of device operation is called the "cutoff", which corresponds to a logical "0" stored in the cell [37]. A higher threshold voltage yields a wider programming window and thereafter benefits more precise control over the read operation state of the devices. For example, two states with programming threshold voltages of 4 V and 2 V define a memory window, ΔV, of 2 V, which is clearly better than a window of 1.5 V attained in the case where the programming threshold voltages of the 0 and 1 states are, respectively, 3 V and 1.5 V. On the other hand, a higher erasing voltage is conducive for a more stable state and more effective retention of the stored charge in the memory devices. However, a higher programming threshold voltage may also cause a more serious impact on the tunneling oxide and induce larger current leakage to lower the erasing voltage. As mentioned above and presented below in Figure 4, the new types of floating gates, in particular the np-type, are observed to effectively improve the performance of the memory devices fabricated without the application of any new materials or changes to their physical structure. As clearly seen in the cumulative probability plot and box plot, the programming threshold voltage of the np-type floating gate was as high as about 1.2 times that of the conventional one (the control split) and much higher than that of the pn-type one (Figure 4a,b), while the erasing voltage of the np-type floating gate was close to that of the conventional one and also higher than that of the pn-type one, both suggesting the better performance of the np-type floating gate in controlling the operation of the memory devices (Figure 4c,d).
The erasing voltages of the floating gates that we showed in Figure 4 were actually measured with a deliberately designed test key to check the floating gate state after the charges of the floating gate were cleaned up by applying a high voltage on the substrate. The lower the |Ver|, the easier it is for the cell to be turned on, which typically corresponds to a logical "1" stored in the cell. The degradations of the programming and erasing operation (after 3000 cycles) were also investigated to understand the performance of the different types of floating gates, as given in Figure 5. It was clear that the np-type floating gate showed a much better performance than the pn-type one and had the same performance as the control split, indicating no extra current leakage from the tunneling oxide even at a higher threshold voltage. Conclusions In summary, a new np-type floating gate with n-p junction polysilicon (bottom "n+" followed by top "p+" with a thickness ratio of 1:3) was developed in this study to reduce the charge leakage and improve the operation performance of memory devices. A depletion zone of carriers was formed at the n-p junction interface, leading to a narrowed channel and thus an increased sheet resistance that was 2.1 times that of the conventional floating gate. The relatively high charge storage and retention in the np-type floating gate is able to inhibit the charge leakage, owing to the high energy barrier at the n-p junction interface. Moreover, the programming threshold voltage difference between the 0 and 1 states (i.e., the memory window) of the np-type floating gate was effectively elevated by 1.2 times, while the erasing voltage and its degradation were close to that of the conventional one, indicative of no extra current leakage even at a higher programming threshold voltage and the better operation performance of the memory devices.
Data Availability Statement: The data presented in this study are available on request from corresponding author.
2022-05-22T15:07:44.993Z
2022-05-01T00:00:00.000
{ "year": 2022, "sha1": "67735670ec40ecce07f840ef222eef0b7cbe576c", "oa_license": "CCBY", "oa_url": "https://www.mdpi.com/1996-1944/15/10/3640/pdf?version=1652967037", "oa_status": "GOLD", "pdf_src": "PubMedCentral", "pdf_hash": "ae4eb23971a447ed8e6c3059c6b383c2ed36fddb", "s2fieldsofstudy": [ "Engineering", "Materials Science", "Physics" ], "extfieldsofstudy": [ "Medicine" ] }
249378762
pes2o/s2orc
v3-fos-license
Maternal Care and Pregnancy Outcomes of Venezuelan and Colombian Refugees Background Ecuador is a major host country for Colombians fleeing violence and Venezuelans escaping a complex humanitarian crisis, many of whom are pregnant women. Methods We used national birth registry data (2018–2020) to compare the maternal care and infant outcomes of Venezuelan and Colombian immigrants with Ecuadorian nationals. Results Venezuelan immigrants had a lower adjusted odds (AOR) for adequate prenatal care (AOR = 0.64;95%CI = 0.62,0.67) but a higher AOR for institutional (AOR = 2.68;95%CI = 1.84,3.93) and C-section delivery (AOR = 1.28;95%CI = 1.23,1.32) and birthing infants who were moderate-late preterm (AOR = 1.12;95%CI = 1.05,1.20), very preterm (AOR = 1.20;95%CI = 1.04,1.40), extremely pre-term (AOR = 1.65;95%CI = 1.27,2.14), low birthweight (LBW) (AOR = 1.11;95%CI = 1.05,1.17), very LBW (AOR = 1.35;95%CI = 1.12,1.62), and extremely LBW (AOR = 1.71;95%CI = 1.36,2.16). Colombians had decreased AORs for adequate prenatal care (AOR = 0.82;95%CI = 0.78,0.87) but increased AORs for institutional (AOR = 2.03;95%CI = 1.19,3.46) and C-section deliveries (AOR = 1.07;95%CI = 1.01,1.13) and birthing infants with moderate-late preterm (AOR = 1.17;95%CI = 1.05,1.30) but not LBW. Discussion The findings underscore the need to address the causes of adequate prenatal care, excess C-sections, and poorer infant outcomes among refugee and immigrant women, especially Venezuelans. exposure to stressors that negatively impact maternal-fetal health. Data Source and Participants We analyzed the three most recent years of national live birth registry records collected and maintained by the Ecuadorian National Institute of Statistics and Census (Spanish acronym: INEC) [22]. The birth registry data were recorded on a standard form that was completed by the medical professional (institutional births) or a civil registry official/other authorized personnel (home/other non-institutional births). The de-identified database included all live births registered in Ecuador between January 1, 2018-December 31, 2020 (n = 845,814). We sequentially excluded multiple gestations (n = 12,304) and cases missing data on infant number (n = 20,312), gestational age (n = 8,690), infant birthweight (n = 832), prenatal care (n = 433), and maternal age, ethnicity, and other sociodemographic variables (n = 896). We also excluded foreign nationals whose reported nationality was other than Colombian or Venezuelan (n = 3,866) and as well as records missing data on nationality (n = 360). This resulted in a total of 798,121 cases available for the analyses: Venezuelans (n = 22,619), Colombians (n = 7,638), and Ecuadorian nationals (n = 767,864). The United Nations High Commission on Refugees (UNHCR) defines refugees as, "people who have fled war, violence, conflict or persecution and have crossed an international border to find safety in another country" [23]. Based on circumstantial evidence from INEC migration statistics [24] and published reports from UNHCR and other refugee-serving organizations for the same time period suggests that the majority of the Venezuelan and Colombian immigrants whose birth were recorded in the INEC birth registry database were most likely refugees [2,3,8,9,25]. However, since the database did not provide information on the legal status of these women, in this paper, we refer them as "immigrants." 
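As a quick consistency check on the sample construction described above, the sequential exclusions can be tallied in a few lines (counts taken directly from the text):

# Reproduce the sample-size bookkeeping from the registry exclusions above.
total_births = 845_814
exclusions = {
    "multiple gestations": 12_304,
    "missing infant number": 20_312,
    "missing gestational age": 8_690,
    "missing birthweight": 832,
    "missing prenatal care": 433,
    "missing maternal age/ethnicity/other sociodemographics": 896,
    "other foreign nationals": 3_866,
    "missing nationality": 360,
}
remaining = total_births - sum(exclusions.values())
print(remaining)  # 798121, the analytic sample reported in the text
assert remaining == 22_619 + 7_638 + 767_864  # Venezuelan + Colombian + Ecuadorian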
Maternal Care and Infant Birth Outcomes The database contained information on the number but not the timing of prenatal visits. Thus, we used a modification of WHO antenatal care recommendations [26] to classify prenatal care adequacy: delivery at weeks 21-23, (≥ 2 contacts), weeks 24-26 (≥ 3 contacts), weeks 27-30 (≥ 4 contacts), weeks 31-34 (≥ 5 contacts), weeks 35-36 (≥ 6 contacts), weeks 37-38 (≥ 7 contacts), and weeks 39-42 (≥ 8 contacts). Other maternal care outcomes analyzed current complex humanitarian emergency date back more than a decade [8]. However, in recent years, the country's economic, political, and human rights situation deteriorated even further as inflation soared, violence and crime increased, and shortages of food, medicine, vaccines, and basic services (e.g., water, electricity, health care) became more common. This situation contributed to high rates of malnutrition, infectious and chronic disease morbidity, and maternal-infant mortality [2,[8][9][10]. The number of Venezuelan refugees living in Ecuador has been increasing since 2016 and in 2018, surpassed those from Colombia. Ecuador presently hosts 430,000 legally recognized Venezuelan refugees [3,8,9] but many thousands more undocumented persons also are believed to be in the country [9]. Refugees and other forcibly displaced persons can have poor health due to the multiple physical and mental hardships they experience prior to, during, and/or after migration such as harsh environmental conditions, hunger, infectious diseases, exacerbation of pre-existing health conditions, physical violence, and stress, among others [11][12][13][14]. These exposures, coupled with reduced access to maternal and other health care, can increase maternal-fetal risk for adverse outcomes [15][16][17]. Few studies have reported on the maternal care and birth outcomes of pregnant Venezuelan refugees living in South American host countries and none have done so for Colombian refugees. These limited findings suggest that Venezuelan refugees giving birth in either Brazil [18,19] or Colombia [20,21] tend to have limited prenatal care and often deliver by Cesarean-section (C-section) [18][19][20]. Two Colombian studies also reported that low birth weight was more prevalent among the infants of Venezuelan refugees than local country nationals but disagreed with respect to preterm births and low Apgar scores [20,21]. Examination of the specific maternal care challenges and infant birth outcomes of Venezuelan and Colombian immigrants, including those who are refugees, living in Latin American host countries and other immigrants has practical value for informing public health policy and interventions to improve maternalchild outcomes. We analyzed Ecuadorian live birth registry data (2018-2020) to compare the maternal care and infant outcomes of Colombian and Venezuelan immigrants, most of whom are most likely refugees, with those of Ecuadorian nationals. Our working hypothesis was both immigrant groups would have poorer access to prenatal care, institutional deliveries, and skilled birth attendants compared to Ecuadorians because of migration-related barriers. We also hypothesized that they would be more likely to give birth by C-section and deliver infants with low birthweight (LBW), preterm birth (PTB), and low Apgar scores due to migration-related through voluntary payments. The Guayaquil Welfare Board is a large non-profit non-governmental organization that operates several hospitals in Guayaquil, Ecuador's largest city. 
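The adequacy rule above maps the gestational week at delivery to a minimum number of prenatal contacts. A minimal sketch of that classification follows; the function names and the treatment of deliveries outside 21-42 weeks are assumptions, not from the paper.

# Sketch of the modified WHO prenatal-care adequacy rule described above.
def minimum_contacts(gestational_weeks: int) -> int:
    """Minimum prenatal contacts for care to count as adequate."""
    if 21 <= gestational_weeks <= 23: return 2
    if 24 <= gestational_weeks <= 26: return 3
    if 27 <= gestational_weeks <= 30: return 4
    if 31 <= gestational_weeks <= 34: return 5
    if 35 <= gestational_weeks <= 36: return 6
    if 37 <= gestational_weeks <= 38: return 7
    if 39 <= gestational_weeks <= 42: return 8
    raise ValueError("delivery week outside the classification range")

def prenatal_care_adequate(visits: int, gestational_weeks: int) -> bool:
    return visits >= minimum_contacts(gestational_weeks)

print(prenatal_care_adequate(visits=5, gestational_weeks=38))  # False: 7 contacts required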
Hospital and clinics run by the national armed forces, national police service, and municipal hospitals and clinics constitute other types of public health care entities in Ecuador. The private for-profit health care institutions were comprised of privately operated hospital, clinics, and medical offices. The two non-institutional delivery sites reported in the database were home (i.e., home births) and other locations where women gave birth such as public roadways, parks, and commercial centers. Infant gestational age was classified as extremely preterm (< 28 wk), very preterm (28-32 wk), moderate-late preterm in the study included birth attendant (skilled, unskilled), delivery mode (vaginal, C-section), and delivery site type (institutional, non-institutional). Institutional deliveries were defined as those that occurred in a public, private, or non-governmental health care facility and non-institutional deliveries were those that took place in a home or other non-health care setting. The Ministry of Public Health operates the single largest public health care system in Ecuador. It provides universal health care through its extensive network of hospitals and regional health clinics. The Ecuadorian Social Security Institute operates the second largest public health care system in the country. IESS hospitals and clinics provide services for its members and their families through employee and employer payroll deductions or Table 1 Maternal characteristics compared by maternal country of permanent residence models were constructed to analyze the association of maternal nationality with maternal care categorical indicators including prenatal care (adequate, inadequate), delivery site (institutional, non-institutional), delivery mode (vaginal, C-section), and birth attendant (skilled, unskilled). The bivariate regression models produced unadjusted (OR) and adjusted odds ratio (AOR) estimates with their 95% confidence intervals. Multinomial logistic regression was used to analyze the association of maternal nationality with infant categorical outcomes: gestation length (extremely pre-term, very pre-term, moderate-late pre-term, term) and birthweight (extremely low birthweight, very low birthweight, low birthweight, average birthweight). Bivariate logistic regression analysis was used for the 1-and 5-minute Apgar scores (low, not low). The logistic regression models produced unadjusted (OR) and adjusted odds ratio (AOR) estimates with their 95% confidence intervals. (32-36 wk), and term birth (≥ 37 wk). Birthweight was categorized as extremely low birthweight (< 1000 gm), very low birthweight (1000-1499 gm), low birthweight (1500-2499 gm), and average birthweight (2500-3999 gm). One-and 5-minute Apgar scores were categorized as low (< 7) or not low (≥ 7). Data Analysis Summary data are presented as number (%) or mean ± SD. Our initial analyses compared maternal nationality with maternal care and infant birth outcome variables using X 2 or ANOVA, as appropriate. We used GLM to analyze the unadjusted and adjusted associations of maternal nationality (Venezuelan immigrant, Colombian immigrant, Ecuadorian national) with the average number of prenatal visits. The adjusted model included maternal age, ethnicity, education, marital status, urbanicity, parity, gestational age, birth year, and province region. 
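The outcome categories described in this section translate directly into simple cut-point functions. The sketch below uses the cut-points from the text; the handling of the shared boundary at 32 weeks and of birthweights of 4000 g or more is an assumption.

# Sketch of the gestational age, birthweight and Apgar categorizations above.
def gestation_category(weeks: float) -> str:
    if weeks < 28: return "extremely preterm"
    if weeks < 32: return "very preterm"            # 28-32 wk in the text
    if weeks < 37: return "moderate-late preterm"   # 32-36 wk in the text
    return "term"                                    # >= 37 wk

def birthweight_category(grams: float) -> str:
    if grams < 1000: return "extremely low birthweight"
    if grams < 1500: return "very low birthweight"
    if grams < 2500: return "low birthweight"
    if grams < 4000: return "average birthweight"    # 2500-3999 g in the text
    return "above categorized range"                 # not categorized in the text

def apgar_low(score: int) -> bool:
    return score < 7

print(gestation_category(31), "|", birthweight_category(1480), "|", apgar_low(6))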
Bivariate logistic regression 3 Birth attended by medical physician, nurse midwife (obstetriz), or nurse OR = Odds ratio; * p = 0.0001, # p = 0.009, ^ p = 0.02 All models adjusted for maternal age, ethnicity, marital status, education, urbanicity, parity, birth year, province region of residence; C-section model also adjusted for prenatal care adequacy, gestational age, delivery site The maternal care and infant birth outcome models adjusted for conceptually relevant covariates including maternal age, ethnicity, education, marital status, parity, and urbanicity. In addition to these variables, the delivery mode model also adjusted for gestational age, prenatal care adequacy and delivery site while the Apgar score models also included gestational age. Furthermore, all of the multivariate models also adjusted for birth year and province region. The publicly available, de-identified live-birth registry database published by INEC was classified, as "Research not subject to human subjects regulations," and was exempt from human subject review by the Indiana University Institutional Review Board (https://research.iu.edu/compliance/ human-subjects/review-levels/index.html). Ecuadorian reference group. For example, infants delivered by Venezuelans had unadjusted ORs for extremely preterm, very preterm, or moderate-to-late preterm birth that were 1.2 to 1.6 times higher than those of reference group women. After adding model covariates, the adjusted OR for extremely preterm birth increased only minimally while that of the other two preterm birth categories was slightly attenuated. Different from Venezuelan infants, those born to Colombian immigrant women exhibited a higher odds only for moderate-late preterm but not the other two preterm birth categories. The inclusion of covariates in the model minimally increased their unadjusted OR of 1.1 to 1.2 over that of the Ecuadorian reference group infants. Infants delivered by Venezuelan but not Colombian immigrants had higher adjusted odds for low birthweight, very low birthweight, and extremely low birthweight that were increased 1.3 to 1.7 times over that of Ecuadorian infants. Adjustment for covariates produced only relatively minor changes in the adjusted ORs for all three low birthweight categories ( Table 3). Finally, the infants of both immigrant groups had unadjusted ORs for low 1-minute and 5-minute Apgar scores that were increased by 1.3 times higher than those of infants born to the Ecuadorian reference group. The adjusted 1-minute and 5-minute ORs of infants born to Venezuelan immigrants showed only a small change after inclusion of the model covariates. The infants of Colombian immigrants showed a relatively small decrease in their adjusted OR for the 1-minute Apgar score different from their adjusted 5-minute Apgar score OR which was 1.7 times higher compared to Ecuadorian reference group infants. Discussion Ecuador is a major host country for Colombians fleeing violence and Venezuelans escaping a complex humanitarian crisis. Migration under these conditions is associated with significant challenges. Many experience physical hardships, emotional stress, physical violence, food insecurity, and other exposures and many have only limited or no access to health care, all of which can negatively impact their health and well-being [11-14, 27, 28]. 
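For readers unfamiliar with how adjusted odds ratios of this kind are typically obtained, the sketch below shows one conventional way to fit a covariate-adjusted logistic regression and exponentiate the coefficients. It is illustrative only, with a hypothetical data file and column names, and is not the authors' code.

# Illustrative covariate-adjusted logistic regression for one binary outcome.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

df = pd.read_csv("births.csv")  # hypothetical analytic file, one row per live birth

model = smf.logit(
    "adequate_prenatal_care ~ C(nationality, Treatment('Ecuadorian')) "
    "+ maternal_age + C(ethnicity) + C(education) + C(marital_status) "
    "+ C(urbanicity) + parity + C(birth_year) + C(province_region)",
    data=df,
).fit()

summary = pd.concat(
    [np.exp(model.params).rename("AOR"),                         # adjusted odds ratios
     np.exp(model.conf_int()).rename(columns={0: "2.5%", 1: "97.5%"})],  # 95% CIs
    axis=1,
)
print(summary)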
Pregnant refuges and other immigrants are particularly vulnerable to the effects of migration-related stressor which can affect their access to maternal care services and negatively affect maternal-fetal outcomes [15][16][17]. To the best of knowledge, ours is the first study to examine these important issues in Venezuelan and Colombian immigrants in Ecuador, many whom the circumstantial evidence suggests are refugees [2,3,8,9,24,25]. One of the major findings was that although Venezuelan and Colombian Table 1 compares the characteristics of Venezuelan and Colombian women immigrants with the Ecuadorian reference group. As it indicates, the two immigrant groups differed from Ecuadorian nationals as well as from each other on maternal age, ethnicity, education, parity, and other sociodemographic and reproductive history characteristics. Results Venezuelan (adj. mean = 5.4; 95% C.I.= 5.3, 5.4 visits) and Colombian immigrants (adj. mean = 5.9; 95% C.I.=5.8, 5.9 visits) reported fewer prenatal visits, on average, than Ecuadorians (6.0; 95% C.I.=6.0,6.0 visits) even after adjustment for parity, education, and other covariates (p = 0.0001). As Table 2 shows, Venezuelan and Colombian immigrants had an unadjusted OR for adequate prenatal care that was respectively reduced by 30% and 17% compared to the reference group. The addition of covariates to the adjusted model further reduced the odds of adequate prenatal care by 36% among Venezuelans and 18% among Colombians. Figure 1 reports on the specific institutional health care and non-institutional sites where women delivered their pregnancies. 65% of all women in the study gave birth at a Ministry of Public Health facility with a significantly greater proportion of immigrant women doing so compared to the Ecuadorian reference group. In contrast, a greater proportion of Ecuadorians than immigrants delivered at facilities operated by private, for-profit entities, Guayaquil Welfare Board, Social Security Institute, or other public institutions. Relatively few women in the study gave birth at home or at another non-medical site (e.g., public roadway, park, commercial center). As Fig. 1 indicates, fewer than 60 of such births occurred among Venezuelan and Colombian immigrants compared to 2,500 recorded for Ecuadorians. The unadjusted odds of an institutional delivery by Venezuelan and Colombian immigrants was 2.0 and 1.3 times higher than that recorded for Ecuadorians (Table 2). After inclusion of covariates, the adjusted ORs for institutional delivery increased to 2.7 for Venezuelans and 2.0 for Colombians. As Table 2 also indicates, both the unadjusted and adjusted ORs for having a skilled birth attendant present at delivery were similar for both immigrant groups compared to Ecuadorians. Nearly half (47.4%) of all women delivered by C-section delivery. The unadjusted and adjusted ORs for C-section delivery among Venezuelans was around 25% higher than the Ecuadorian reference group. The unadjusted OR for delivering by C-section among Colombian immigrants was 11% lower than Ecuadorians. However, the addition of parity and other model covariates increased the odds of C-section birth by 9% compared to Ecuadorians. Table 3 shows that infants born to the immigrant women in the study, especially Venezuelans, had generally less favorable birth outcomes compared to those born to the Why the Venezuelan and Colombian immigrants in the study had less adequate prenatal care but better access to an institutional delivery than Ecuadorian nationals is unclear. 
One reason might be because both immigrant groups, particularly Venezuelans, are more likely to live in urban areas where most of the health care institutions in Ecuador are located and where urban transportation is easier to obtain to get to the hospital to deliver different from the situation in rural areas. It is possible that some of the Ecuadorians living in rural areas were unable to make it to the hospital on time. These geographic and other potential factors should be explored in future studies. The higher odds for a C-section delivery identified among the immigrant women in our study, especially Venezuelans, is worrying. Although this finding is consistent with that reported for refugees from Venezuela who delivered in Brazil [22], it differs from another indicating that C-section births among Venezuelan refugees was slightly reduced compared to Colombian nationals [23]. The reasons for the excess C-section risk identified among immigrant women in the study, Especially Venezuelans, is uncertain. The database did not indicate as to whether these were emergency vs. elective C-sections nor if they were primary or repeat surgeries. The higher odds of preterm birth, low birthweight, and low Apgar scores identified for the infants of one or both immigrant groups suggests that many had high-risk pregnancies which could have increased their risk for a C-section delivery. Our analyses controlled for factors such as parity, prenatal care adequacy, birth site, and several other variables linked to higher C-section risk but could not do so for any potential maternal, fetal, placental or amniotic fluid pathologies since no information on these was included in the database. In any case, the excess C-section deliveries identified in this study for Venezuelan and Colombians immigrants in Ecuador is an important issue requiring further investigation. When medically justified, a C-section delivery can effectively reduce the risk for maternal-infant mortality and morbidity but it is, nevertheless, a surgical intervention associated with both short-and longer-term health risks for both mother and infant [35]. The excess of low birthweight identified among those of Venezuelans in our study is consistent with a study of Venezuelan refugees living in Colombia that analyzed birth registry data for 2016-2018 [21] and another based on the same data source for a single year (2017) [20]. The excess in preterm births identified for babies born to Venezuelans in our study also is consistent with the aforementioned 2016-2018 Colombian birth registry study [21] but differs from the single year study reporting a lack of association [20]. The lower 1-and 5-minute Apgar score odds identified for infants born to Venezuelan immigrants concurs with those reported by the single year Colombian birth registry immigrants appeared have better or similar access to certain maternal care services institutional deliveries and skilled birth attendants compared to the Ecuadorian reference group, they were less likely to have adequate prenatal care. This is an important concern since adequate prenatal care is documented as one of the most cost-effective public health interventions for reducing the risk for adverse maternalperinatal outcomes [26]. 
Although the prenatal care finding is consistent with the limited data reported for Venezuelan refugees living in other Latin American host countries, i.e., Brazil and Colombia [18][19][20][21], we are unable to compare our findings on those from Colombia due to a lack of published studies. Refugees and other immigrants in Ecuador, regardless of their legal status or income, are eligible for free prenatal care and other health services through the Ministry of Public Health system [9,29]. There are several possible reasons why the immigrant women in the study had a lower odds of adequate prenatal care compared to Ecuadorian nationals. One possibility is that some could have spent part or most of their pregnancy in their own home country or on the road prior to arriving in Ecuador. This could have impeded their prenatal care access but once in Ecuador, they were able to access labor and delivery services, primarily through the Ministry of Public Health system. It is also possible that some of the same barriers reported in a recent survey of Venezuelan refugees in Ecuador [9] also could have influenced prenatal care access. The two most common frequent health care barriers identified by Venezuelan refugees were difficulties in obtaining an appointment (40%) and excessive distance/lack of transportation to get to the appointment site (15%) rather than a lack of funds (11%) or legal documentation issues (2%) [9]. The same survey also reported that many refugees seemed to know their constitutional rights to medical attention or options for accessing health care services. Other reports have suggested that medical appointment access has become more complicated in recent years due to an economic downturn in Ecuador coupled with the massive influx of Venezuelan refugees which severely taxed the ability of the public health system to provide routine services [30][31][32]. Further austerity measures adopted in the months prior to the covid-19 epidemic and the epidemic itself are reported to have negatively impacted health care services including population access to maternal care in Ecuador [33,34]. Data from the present INEC birth registry database supports this notion since the prevalence of adequate prenatal care among the study women decreased from 2018 to 2020 with the largest declines occurring in Venezuelan immigrants (-31%), followed by Colombian immigrants (-17%), and Ecuadorian nationals (-15%). relevant infant birth outcome indicators in Colombian and Venezuelan refugees and immigrants living in a Latin American host country, in this case, Ecuador. Conclusions Our analysis of live birth registry data identified disparities in the prenatal care and infant outcomes of Colombian and Venezuelan immigrants compared to Ecuadorian nationals. Pregnant immigrants who delivered a liveborn singleton infant in Ecuador during 2018-2020 had fewer average prenatal visits and were less likely to have adequate prenatal care than Ecuadorian nationals although they had good access to institutional deliveries, particularly through the MSP public health system. Immigrant women were also more likely than Ecuadorians to give birth by C-section, deliver infants classified as moderate-late preterm, and had low 1-and 5-minute Apgar scores. Moreover, Venezuelan immigrants also delivered babies with more severe forms of prematurity as well as low birthweight. Future cohort design studies should be conducted to confirm our study findings. 
Mixed-methods investigations would also be useful to help better understand the specific challenges and concerns of refugee and other immigrant women as well as to identify the structural and other barriers that may contribute to their poorer prenatal care and infant outcomes. In turn, this can be used to inform public health policy and develop more effective interventions to improve maternal-child outcomes. * p = 0.0001; ** linear association p = 0.04. study [20] but not that of the multiyear study [21]. The generally poorer birth outcomes identified for infants born to the immigrant mothers in our study has clinical and public health importance since those born too early or too small are at-risk for perinatal morbidity and mortality, postnatal growth and development perturbations, and future development of obesity, cardiometabolic and other chronic diseases, and premature mortality [36][37][38][39]. The limited information contained in the Ecuadorian live birth registry database used for the analyses did not permit the identification of proximate risk factors associated with preterm birth, low-birthweight, and low Apgar scores. Examples include undertreated pre-existing chronic and gestational conditions, specific stressors such as food insecurity, undernutrition, infectious diseases, intimate partner and other forms of violence that have been reported to impact Colombian [13,[40][41][42] and Venezuelan refugees living in Ecuador [9,[30][31][32]. Other reports indicate that refugees and other immigrants from these two countries often experience persistent social, economic, and political exclusion and discrimination, a situation that is reported to have worsened with the large recent influx of the latter into Ecuador [4,9,[30][31][32]. Other reported sources of stress for these includes difficulties in obtaining formal paid employment, adequate housing, or schooling for children and other challenges caused by the Covid-19 epidemic [5,9,12,32]. This is important since fetal exposure to maternal stress hormones, poor nutrition, and other environmental stressors can adversely affect fetal growth, development, and other fetal programming [43][44][45]. This study has some limitations. One is that its crosssectional design allows for inference but not establishment of causal effects. Its findings are applicable only to women delivering live-born singleton infants. Thus, the prevalence of preterm birth, low birthweight, and/or low Apgar scores could be underestimated because of potential selection bias. In addition, the birth registry database did not contain data on stillbirth, postnatal mortality, and maternal mortality, all of which are important indicators for assessing maternal-infant health. The database specified the nationality of women but for foreign nationals, it did not specify their legal status, length of residence in Ecuador, nor reasons for migration. However, indirect evidence from INEC government migration statistics, and the data and reports published by the UNHCR and other refugee-serving organizations strongly suggests that most were either documented or undocumented refugees. Another limitation was that we were unable to adjust for certain conceptually relevant maternal, fetal, and other covariates in some models because they were not contained in the database. However, despite its limitations, our study contributes to the limited data identifying disparities in maternal care and clinically
2022-06-06T13:39:50.151Z
2022-06-06T00:00:00.000
{ "year": 2022, "sha1": "26c9a77ebf7033adbf78e33afc2b46b33b925a3c", "oa_license": null, "oa_url": "https://link.springer.com/content/pdf/10.1007/s10903-022-01370-4.pdf", "oa_status": "BRONZE", "pdf_src": "PubMedCentral", "pdf_hash": "26c9a77ebf7033adbf78e33afc2b46b33b925a3c", "s2fieldsofstudy": [ "Medicine", "Sociology" ], "extfieldsofstudy": [ "Medicine" ] }
234266049
pes2o/s2orc
v3-fos-license
Improved Reliability of Voice over Internet Protocol (VoIP) using Machine Learning Voice over Internet Protocol (VoIP) is a communication means inspired by the revolutionizing wireless internet technology to transfer voice signals. However, VoIP over wireless means faces several challenges in terms of data loss resulting in poor voice quality or communication delay. To deal with the practical aspects of voice communication over Internet Protocol (IP), we propose a highly secure wireless network. The proposed secure wireless network for VoIP combines the advantage of Artificial Bee Colony (ABC) based network optimization driven by node properties. This is followed by the implementation of two classifiers: a Support Vector Machine (SVM) that identifies the affected route and a Convolutional Neural Network (CNN) that detects the malicious nodes present in the affected route to offer secure data transmission. The quality of voice communication is evaluated in terms of dropped packets, delay and jitter to offer an interactive communication service. The simulation analysis over 50 nodes proved the effectiveness in achieving a reliable quality voice calling service with an average throughput of 98.65% and comparatively lower jitter, packet loss and latency of 2.325 ms, 1.35% and 1.616 ms, respectively. Introduction VoIP can be understood as a voice calling process via data packets. In the recent decade, communication means have been revolutionized from the Public Switched Telephone Network (PSTN) to internet-based voice calling using VoIP. In VoIP communication the voice conversation data is sent using Internet Protocol (IP) packets over the internet [1]. Skype, a commercial application taking advantage of VoIP service, has nearly 8 million users [2]. The rising popularity is due to the inherent advantages of VoIP services such as minimal voice calling cost. VoIP has proved to be a cost-effective means of communication; however, there exist some reliability issues adjoining this service, in particular the quality of voice communication, which is highly challenged. VoIP components VoIP essentially consists of three parts: a CODEC (COder/DECoder) for coding and decoding the voice signal, a packetizer and depacketizer that fragment and defragment the encoded signal, and a playout buffer at the receiver end that smooths the delay [3] [4]. An overview of the VoIP system to offer end-to-end voice communication is shown in Figure 1. Table 1 lists some of the most frequently implemented CODECs for VoIP services comparing their bit rate and frame size [5]. The security aspects of a VoIP service majorly rely on the degree of security of the network it uses as the means of transmission. The transmission process first converts audio signals into IP packets and transfers them through the network. During transfer, instances of data loss and attack have been observed that have a negative effect on the Quality of Service (QoS) offered by VoIP. Here, QoS is the major concern in reflecting customer satisfaction, measured in terms of latency, dropped packets, jitter, echo and throughput. It has been observed that rising traffic in a network results in packet drop and even a packet loss as small as 1% could significantly damage VoIP service [6]. In the current study, an attempt is made to offer a secure network for VoIP service using ABC based optimization followed by SVM and CNN to decrease the latency and packet drop.
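The QoS indicators named above (latency, dropped packets, jitter, throughput) can be computed from a per-packet trace. The small sketch below uses synthetic values; the definitions chosen here, for example jitter as the mean delay variation between consecutive packets, are common conventions and an assumption rather than the paper's exact formulas.

# Illustrative computation of VoIP QoS indicators from a hypothetical packet log.
def qos_metrics(sent, received):
    """sent: {seq: send_time_s}; received: {seq: (recv_time_s, payload_bytes)}."""
    delays = [received[s][0] - t for s, t in sent.items() if s in received]
    latency_ms = 1000 * sum(delays) / len(delays)
    # jitter: mean absolute variation of one-way delay between consecutive packets
    jitter_ms = 1000 * sum(abs(a - b) for a, b in zip(delays[1:], delays)) / max(len(delays) - 1, 1)
    loss_pct = 100 * (len(sent) - len(delays)) / len(sent)
    span_s = max(t for t, _ in received.values()) - min(t for t, _ in received.values())
    throughput_kbps = 8 * sum(size for _, size in received.values()) / span_s / 1000
    return latency_ms, jitter_ms, loss_pct, throughput_kbps

sent = {i: i * 0.020 for i in range(100)}                              # 20 ms packet interval
received = {i: (t + 0.0016, 160) for i, t in sent.items() if i % 50}   # drop 2%, 1.6 ms delay
print(qos_metrics(sent, received))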
The paper is organized in five sections. Section 1 introduces the VoIP service, the popular CODECs implemented in the service and the components necessary for its deployment. Section 2 discusses the existing approaches implemented for the improvement of QoS in VoIP, Section 3 describes the proposed implementation to improve reliability issues such as QoS, and the results and outcomes are discussed in Section 4. Lastly, Section 5 concludes the paper. Literature Review This part of the paper summarizes the work done by various researchers. In this context, researchers have discussed the security challenges faced by VoIP services, analysed the existing approaches in terms of the defence offered and best practice, and postulated guidelines for future researchers to secure the network [7]. A security-enhanced Session Initiation Protocol (SIP) system was developed with a protective firewall and proxy server that proved to be effective in defending against SIP flooding attacks [8]. Researchers have offered an Elliptic Curve Encryption approach to secure VoIP against the man-in-the-middle attack; in the process they also performed network modelling to address DoS attacks, and the simulation analysis proved the implemented strategy to successfully defend the network against SIP flooding and Denial-of-Service attacks [9]. The Ad hoc On-Demand Distance Vector (AODV) routing algorithm has been tested in network modelling for QoS parameters like throughput [11]. Machine learning techniques have been applied for the prediction of voice quality in distinct network environments [12]. An intelligent security system has been examined to protect against Distributed Denial-of-Service (DDoS) attacks that mainly involves, firstly, DDoS attack detection and, secondly, a discriminator to identify the affected areas; the presented approach was designed over a simulated telephone network, and telephone service providers can change their circuit-switched networks to packet-switched ones by using the VoIP security model [13]. One researcher proposed a VoIP scheme using a stream cipher technique and chaotic cryptography to produce a secure key; the stream without encryption as well as the stream based on the cipher and a logistic key generator were examined [14]. Another researcher proposed a secure VoIP model that worked on the basis of a hidden Markov model against dynamic voice spammers; the malicious user was detected based on voice behaviour and the results were computed in two scenarios, one for heavy traffic and the other for medium traffic, attaining a TPR (True Positive Rate) of 95% and 92.5% for heavy and medium traffic, respectively [15]. The Convolutional Neural Network based Multi-domain Learning Scheme (CNN-MLS) has been used as a classifier for arriving at a result based on the feature outcomes from all subnets [16]. Researchers have reviewed various papers in the context of the Artificial Bee Colony (ABC) algorithm and found it useful for solving real-time engineering problems [17]. A reliable wireless network has been presented for VoIP-based applications and its performance examined in terms of routing overhead, average delay and average jitter [18]. Methodology This section discusses the proposed solution designed to offer a secure VoIP service. The literature reflects a number of contributions made by researchers in this context; however, none has been able to offer the desired level of network security required for a VoIP service.
Network Deployment The first step is to define a simulation area for the experiment and for evaluating the performance of the VoIP service. A simulation area (A) of 1000 m2 is therefore defined in the proposed work. This is followed by the deployment of a network containing 'N' nodes, each node having a coverage limit of 25% of the total simulation area. Among the nodes, a source node and a destination node are defined and a route is created between the two for data transmission. The performance of the deployed network for the VoIP service is computed and improved using the ABC fitness function, which optimizes the network based on the properties of the route nodes. ABC evaluates node properties based on packets dropped, latency and inter-node distance. Two machine-learning classifiers, SVM and CNN, follow this step. In the present work, SVM identifies the affected route based on the node properties, and this information is fed to the CNN, which further identifies the affected node present in the identified affected route. The performance of the route in offering a secure VoIP service is computed in terms of QoS parameters. The flow diagram of the proposed design is shown in Figure 2. Optimized Node Property using ABC In response to the unsatisfactory performance of the network routes created for deployment of the VoIP service, a few steps are followed to improve the network quality. The first step is the evaluation of the nodes based on their inherent properties. ABC works by identifying optimal threshold values based on the fitness function so as to offer a satisfactory outcome. All nodes exhibit distinct properties, namely location, energy consumption, latency and packet drop during data transmission, which significantly affect the overall network performance. In the present work, the ABC fitness function is used to separate the nodes into two classes, normal and abnormal, based on node properties. The steps involved in the process are given in Algorithm 1. Algorithm 1: ABC for defining node property 1. Input: the N nodes (bee population) deployed in the 1000 m2 area 2. Output: the groups of abnormal and normal nodes 3. Initialize parameters and calculate the length of the node set 4. L = length(nodes) // represents the size of the node population 5. Locate random nodes 6. d_i = min(x_{i+1} - x_i) // distance between the current node and the next node 9. Calculate a fitness score based on all node properties 10. threshold = sum over node properties // threshold value for nodes based on node property (latency, delay, energy consumption and distance) 11. Calculate the fitness function 12. A node with high latency, high energy consumption and small inter-node distance is assigned as an affected node 15. Select the node with the least fitness value 16. Output: nodes distinguished as affected and normal nodes The above algorithm divides the N nodes into two groups, normal nodes and abnormal nodes, based on the node properties exhibited during data transmission. ABC analyses the nodes in the created route in terms of their inter-node distance, which is expected to be larger for better network coverage. In addition, latency, energy consumption and packet drop are analysed; these are expected to be of smaller magnitude in order to offer better network performance.
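As a rough illustration of the node-scoring step in Algorithm 1, the following Python sketch assigns each node a fitness score from its latency, energy consumption, packet drop and inter-node distance, then splits the population at a threshold. The weighting (a plain sum, with distance entering negatively) and all variable names are assumptions for illustration, not the authors' exact fitness function.

# Minimal sketch of node scoring in the spirit of Algorithm 1 (assumed fitness form).
nodes = [
    # (node_id, latency_ms, energy, packet_drop, dist_to_next_node)
    (0, 2.0, 0.3, 0.01, 25.0),
    (1, 9.5, 0.9, 0.12,  4.0),
    (2, 1.5, 0.2, 0.00, 30.0),
]

def fitness(latency, energy, drop, dist):
    # High latency, energy and drop are bad; a large inter-node distance is good,
    # so it enters with a negative sign in this illustrative score.
    return latency + energy + 100.0 * drop - 0.1 * dist

scores = {nid: fitness(l, e, d, x) for nid, l, e, d, x in nodes}
threshold = sum(scores.values()) / len(scores)          # population-average threshold
abnormal = [nid for nid, s in scores.items() if s > threshold]
normal   = [nid for nid, s in scores.items() if s <= threshold]
print("abnormal:", abnormal, "normal:", normal)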
Based on this information, the ABC fitness function is called to divide all nodes into two groups: one containing nodes demonstrating the properties of normal nodes, which offer better network data transmission, and one containing nodes whose properties challenge the quality of data transmission. This information is then forwarded to the computational intelligence techniques. Identification of Affected Route using SVM In the previous step, the ABC objective function was used to distinguish nodes based on their properties during data transmission from the source node to the destination node. In the current step, the route is evaluated using SVM based on the node-group information passed by the ABC. In the present work, a radial basis function (RBF) kernel is used, which is one of the most popular kernels for SVM training. The steps involved in identifying the route property are given in Algorithm 2. Algorithm 2: SVM for identifying affected route 1. Input: categorized data of the optimized nodes (normal/abnormal groups from ABC) 2. Output: affected route and normal route 3. Initialize the SVM and set its parameters 4. Train the SVM on the node-group data // identifies a trusted route based on the properties of the group of nodes involved 8. If the route's node-group property is validated, mark the route as normal 9. Otherwise, mark it as an affected route The above algorithm uses the categorized node data produced by ABC as training data. This information, together with the SVM kernel function, is used to train the SVM, which creates a training database of route node properties. The trained SVM can then predict the affected route among the deployed routes. The information about the affected route is used in the next step. Malicious node prediction using CNN This sub-section discusses the implementation of the CNN for predicting the affected node within the affected route identified by the SVM. The CNN architecture is more complex and can be regarded as a multilayer perceptron. It is a class of deep neural network with pooling layers that significantly reduce the dimensionality of the features. It consists of convolutional layers at the input, pooling layers (also known as sub-sampling layers), and fully connected layers at the end of the architecture that produce the output. CNN training is computationally expensive; however, by reducing the feature size it speeds up the overall computation and reduces the processing time. It is initialized using parameters such as the number of epochs, neurons, cross entropy, gradient and mutations, together with a scaled conjugate technique. The steps involved in the prediction of the affected node are given in Algorithm 3. The algorithm uses the information about the affected route passed by the SVM classifier. Based on this information, the convolutional layers train the CNN structure to evaluate the nodes present within the affected route. If a node's property is validated, the node is marked as a normal node and considered secure for data transmission; otherwise, the node is marked as a malicious node. The steps discussed above are designed to achieve reliable and secure VoIP-based communication. The knowledge-based selection of ideal and secure nodes improves the communication system, and this hybrid approach speeds up the simulation and results in the faster communication system desired for VoIP services. Results and Discussion The performance of the proposed improved communication process is evaluated in terms of network reliability.
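To make the route-classification step of the methodology concrete, the Python sketch below trains an SVM with an RBF kernel (scikit-learn) on per-route features and predicts whether new routes are affected. The feature layout (fraction of abnormal nodes, mean latency, mean packet drop per route) is an assumption for illustration, not the paper's exact feature set.

# Minimal sketch of the SVM step with an RBF kernel
import numpy as np
from sklearn.svm import SVC

# Training data: one row per route, label 1 = affected route, 0 = normal route
X_train = np.array([[0.1, 2.0, 0.01],
                    [0.6, 8.5, 0.10],
                    [0.2, 3.0, 0.02],
                    [0.7, 9.0, 0.15]])
y_train = np.array([0, 1, 0, 1])

clf = SVC(kernel="rbf", gamma="scale")   # radial basis function kernel, as in the paper
clf.fit(X_train, y_train)

new_routes = np.array([[0.15, 2.5, 0.01], [0.65, 7.8, 0.12]])
print(clf.predict(new_routes))           # e.g. [0 1]: second route flagged as affected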
VoIP transmission is evaluated under two scenarios: first, a normal network without attack, and second, VoIP communication under attack. In the following sections QoS is measured in terms of the throughput, packet loss rate, jitter and latency observed for the VoIP service under both scenarios. Table 2 lists the throughput observed before and after the attack as the number of nodes is varied from 5 to 50. The values are plotted in Figure 3 for graphical analysis, with the node count on the X-axis against the throughput on the Y-axis. Before attack, an average throughput of 96.88% is achieved using ABC with SVM and 97.7% using ABC with CNN, compared to 98.65% by the proposed design. These values drop to 95.62%, 97.26% and 98.46%, respectively, due to the network attack. The enhanced throughput of the proposed design is due to the involvement of both SVM and CNN, which offer a secure route discovery. Packet Loss Comparison In VoIP, packets are broadcast over the network, and any loss of packets may significantly deteriorate the voice quality. The packet loss observed before and after attack is computed and listed in Table 3, again with node counts varying between 5 and 50. The proposed approach shows relatively lower packet loss than ABC with SVM. Packet loss of the proposed design in comparison to ABC with SVM and ABC with CNN is shown in Figure 4. Average packet losses of 2.0% and 2.37% (before and after attack) are found using ABC with SVM, and 2.3% and 2.7% using ABC with CNN, compared to the proposed work combining ABC with both SVM and CNN, which exhibits an average packet loss of 1.35% and 1.53% before and after attack, respectively. Overall, lower packet loss is observed using the proposed approach. Jitter Comparison (ms) Jitter represents the variation observed in the delay of the packets, in milliseconds (ms). The jitter observed before and after attack is summarized in Table 4 for both ABC with SVM and the proposed work; overall, lower jitter is demonstrated by the proposed approach. The jitter comparison in Figure 5 shows that before attack the proposed approach demonstrates a lower average jitter of 2.325 ms, compared to 3.35 ms for ABC with SVM and 3.15 ms for ABC with CNN. The jitter for all cases increases to 3.973 ms (ABC with SVM), 3.735 ms (ABC with CNN) and 2.66 ms (proposed) as a result of the network attack. However, the jitter for the proposed work remains lower than for the other combinations. Latency Comparison (ms) Latency is compared to identify the overall delay in packet delivery over the network. Latency values of the proposed design, ABC with SVM and ABC with CNN are listed in Table 5 before and after the attack. The tabulated latency values show that an overall lower average delay is observed using the proposed approach. The latency of the three combinations is plotted in Figure 6 for both cases, before and after the network attack. An average latency of 1.616 ms is observed using the proposed approach, compared to 3.166 ms and 2.533 ms using ABC with SVM and ABC with CNN, respectively. The latency values increase to 3.75 ms, 3.001 ms and 1.846 ms following the network attack for ABC with SVM, ABC with CNN and the proposed work, respectively. The overall lower latency is demonstrated by the proposed approach.
The performance analysis has shown that the proposed three-fold strategy, namely optimization of node properties using ABC followed by identification of the attacked route and of the malicious node within that route, is effective in achieving a reliable and secure VoIP service. Conclusion A reliable and secure wireless network for VoIP has been designed in the MATLAB simulator. To show the efficiency of the proposed work, the results have been computed both in the presence and in the absence of an attack. ABC-based optimization is performed to optimize nodes based on node properties. This information is used by the SVM classifier to detect the affected route, followed by the CNN to identify the malicious node present in the affected route. The efficiency of the designed approach has been evaluated over 50 nodes in terms of throughput, packet loss, latency and jitter, so as to provide a reliable and secure VoIP service. The network demonstrated a higher average throughput of 98.65% and 98.46% before and after attack, with overall lower packet loss, jitter and latency of 1.35%, 2.325 ms and 1.616 ms before attack and 1.539%, 2.66 ms and 1.846 ms after attack.
2021-05-11T00:03:49.434Z
2021-01-16T00:00:00.000
{ "year": 2021, "sha1": "569d5527a2b2735f8f40e8fca91abb6b65aee807", "oa_license": null, "oa_url": "https://doi.org/10.1088/1757-899x/1020/1/012025", "oa_status": "GOLD", "pdf_src": "IOP", "pdf_hash": "20e328b620d3f2554f02231373d0f4c34e5069fc", "s2fieldsofstudy": [ "Computer Science", "Engineering" ], "extfieldsofstudy": [ "Physics", "Computer Science" ] }
216444907
pes2o/s2orc
v3-fos-license
Effect of ozone treatment on microbiological and physicochemical properties of soymilk beverage Ozone is a strong oxidant and a potent disinfecting agent, and ozonation has been used for microbial inactivation in several beverages. Accordingly, an ozonation process was applied to soymilk. Soymilk was ozonated for different exposure times (1, 3, 5, 10, and 15 minutes) and evaluated for changes in its microbiological and physicochemical properties. The assays were carried out using a fully randomized experimental design, and the results were statistically evaluated by Duncan's Multiple Range test (p < 0.05). Following ozone treatment, a significant reduction in pH occurred at the 1 minute treatment, whereas a significant increase was recorded after the 3 minute treatment. A significant increase in total dissolved solids was observed at 5, 10, and 15 minutes. Microbial reductions of 0.19 log cycles at 1 minute, 0.35 log cycles at 3 minutes, and 0.57 log cycles at 5 minutes of ozone treatment were recorded. In contrast, an increase in the microbial population occurred at 10 and 15 minutes. The findings presented here could be a prelude to the potential application of ozone treatment of soymilk in the food industry. Introduction Soybean (Glycine max.) is a food material known to be highly nutritious. Soybean contains 35-50% protein and essential amino acids [1], and the quality of soybean protein is similar to that of milk protein. Besides its protein content, soybean is a good source of essential fatty acids and isoflavones. Owing to its high nutritional value, soybean can reduce the risk of coronary disease, breast and prostate cancer, and can improve skin health [2]. This high nutritional value has made soybean popular, and many food products are derived from it, such as tofu and soymilk. Soymilk is a liquid food product made by extraction of soybean, and it has been consumed for centuries in Asia. This beverage contains essential fatty acids and high-quality protein, with no cholesterol, gluten, or lactose. Soymilk is therefore a suitable substitute for cow's milk for lactose-intolerant people [3]. Soymilk is also used as a base for other products, such as soy yoghurt and cheese [4]. Due to its short shelf-life, consumption of fresh soymilk is limited to areas close to the production site [4]. This characteristic has led to research on attempts to extend the shelf-life of fresh soymilk. Thermal treatment has been used for the preservation of soymilk because of its effectiveness in microbial inactivation [5]. However, this method has its shortcomings: thermal treatment can affect the nutritional value of the food product and cause changes in organoleptic characteristics such as color and flavor, which can reduce consumer interest [6]. In an attempt to preserve the quality of soymilk, non-thermal treatments have been used, such as UV treatment, ultrasound, and ozone. Ozone (O3) is a strong oxidant and antimicrobial agent that has been applied in several fields, such as the food industry, water treatment, and medicine. Its high reactivity, penetrability, and spontaneous decomposition into non-toxic products have led to its use in ensuring the microbiological safety of food products. In the food industry, the oxidizing potential of ozone is applied for food preservation, extension of shelf-life, and sterilization of equipment [7].
Application of ozone has been reported for various beverages including apple cider [8], orange juice [9], blackberry juice [10] and raw milk [11]. Ozone is a powerful antimicrobial agent due to its oxidizing capacity and has considerable potential for application in the food industry [12]. Its effectiveness in reducing microorganisms and its freedom from chemical residues are the main benefits of ozone. The mechanism of ozone as an antimicrobial agent involves oxidation and destruction of cell walls and cytoplasmic membranes. Initially, ozone directly oxidizes and destroys the bacterial cell wall and cytoplasmic membrane. Ozone then moves into the cell and acts on its DNA, so bacteria do not develop resistance to ozone. Microbial sensitivity to ozone differs depending on the structure of the cell wall [13]. Ozone can be generated in several ways, such as electrical discharge, electrochemical methods, UV methods, and radiochemical methods [14]. Electrical discharge is commonly used in ozone applications; this method is called corona discharge or dielectric barrier discharge (DBD). The DBD method works with a plasma sourced from oxygen or air, in which electron energy is transferred to the dominant gas molecules such as N2, O2 and H2O through collision processes. Primary radicals (O*, N*, OH* and others), positive ions, negative ions, and excited molecules are produced by these collisions. The primary radicals, together with electron-ion and ion-ion reactions and the release of electrons, produce further secondary radicals such as O2* and H2O [15]. The objective of this paper was to determine the effect of ozone treatment on the microbiological and physicochemical properties of soymilk by quantitatively evaluating the total plate count (TPC), pH value, and total dissolved solids (TDS). Material The material used in this study was fresh soymilk obtained from Mr. Bowo's small and medium enterprise located at 18 Bharata alley, Tembalang, Semarang, Central Java. Sample preparation The soymilk samples were stored in plastic packaging and maintained at ambient conditions at a temperature of 23 ± 2 °C before treatment. Ozone treatment The ozone generator was connected to a device that enlarges the surface area of the soymilk to form a single layer, so that ozone is exposed only to the surface of the soymilk. Before ozonation, the device was cleaned using sterile alcohol, rinsed with sterile distilled water, and flushed with ozone at a concentration of 16 ppm for 2 minutes. A total of 200 mL of the sample was passed through the device and treated by ozonation at a controlled concentration of up to 8 ppm. The device was operated at 2.7 kV with pure oxygen flowing at 0.6 L/min. The ozonation time was varied over 1, 3, and 5 minutes. The samples were then analyzed for microbiological and physicochemical characterization. Total plate count The working principle of TPC analysis is the counting of the number of bacterial colonies present in the sample, with dilutions as needed, performed in duplicate. All work was done aseptically to prevent undesirable contamination, and duplicate observation improves accuracy. The number of bacterial colonies that can be counted is between 30 and 300 colonies. 1 mL of each soymilk sample was serially diluted with 0.90% sterile saline solution (5 to 6 dilutions), 0.1 mL of the appropriate dilution was pipetted onto a sterilized petri dish, and PCA medium was poured evenly over it. The total microbial count was determined after incubation for 24 hours at 30 °C.
The log reduction was calculated from the initial count (control sample) and the counts obtained after the ozonation process. Physicochemical properties pH and total dissolved solids were measured using a digital pH/TDS meter (Hanna HI 98130) to represent the physicochemical properties of soymilk after ozonation. A beaker containing 10 mL of soymilk sample was analyzed in triplicate, and the pH meter was calibrated with standard solutions prior to each measurement [16]. Statistical analysis The analyses were replicated three times. SPSS version 16 statistical software (SPSS, Inc., United States) was used to analyze the results. Means of the three replicates were compared using one-way analysis of variance (ANOVA). The differences between DBD plasma treatment means were evaluated using Duncan's test (p < 0.05). Total plate count The stability of soymilk depends upon the extent to which the microflora alters its freshness. Various fruits and beverages have different levels of background microorganisms. The cold plasma inactivation efficacy for microorganisms varies and depends on many factors, such as the microbial species, the types of reactive species generated, the duration of exposure, the pH and the surrounding environment of the microorganisms [17]. The microbial inactivation in soymilk using ozone is presented in Table 1. The total plate count for the untreated soymilk was 5.97 log CFU/ml. After ozone treatment from 1 to 15 minutes the total plate count was reduced, but after 10 and 15 minutes of ozonation it increased again. Ozone treatment for 1, 3, and 5 minutes resulted in 0.19, 0.35 and 0.57 log reductions, respectively. In contrast, an increase in the microbial population of 0.03 and 0.87 log occurred at 10 and 15 minutes, respectively. This result does not conform with the Indonesian National Standard for soymilk (SNI 01-3830-1995), which requires a total plate count ≤ 2.3 log CFU/ml. The ozonation treatment can reduce the microbial count, but it did not reach the required level. This may be overcome by increasing hygiene during processing, controlling the concentration or treatment time, and it can also be combined with storage at cold temperatures. These results are consistent with the study by Khudhir (2017), in which the microbial count of ozonated milk samples returned to 5.01 log during room temperature storage (± 30 °C) and 5.89 log during cold temperature storage (± 4 °C) [18]. The efficacy of ozone treatment of a soymilk sample can be determined by certain organics, inorganics, or suspended solids. Dissolved organic matter reduces the disinfection activity by consuming ozone to produce compounds with little or no microbicidal activity, thereby reducing the concentration of active species available to react with microorganisms [19]. A study of the inactivation of E. coli in orange juice found that the efficacy of ozonation was reduced in the presence of ascorbic acid and organic matter [20]. Meanwhile, our results showed an inactivation trend from 1 to 5 minutes, followed by a log increase after 10 and 15 minutes of ozone treatment. This can be explained by the ozone concentration present or available in the medium being the parameter determining ozone efficacy. An increased ozone concentration causes saturation and thus makes the addition of further ozone to the reactor ineffective, resulting in longer times to achieve the same log-reduction values [19]. In addition, a longer ozonation time allowed the sample to come into contact with air carrying many bacteria, which contaminated the soymilk sample, because the ozonation device is discontinuous.
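To make the plate-count arithmetic described above concrete, the following Python sketch converts a colony count at a given dilution into CFU/mL and computes the log reduction relative to an untreated control. The numbers are illustrative only, not measurements from this study.

import math

def cfu_per_ml(colonies, dilution_factor, plated_volume_ml=0.1):
    # Countable plates should have 30-300 colonies
    return colonies * dilution_factor / plated_volume_ml

control = cfu_per_ml(93, 1e4)     # e.g. 93 colonies on the 10^-4 plate of the control
treated = cfu_per_ml(61, 1e4)     # e.g. 61 colonies on the 10^-4 plate after ozonation
log_reduction = math.log10(control) - math.log10(treated)
print(f"control: {math.log10(control):.2f} log CFU/mL, reduction: {log_reduction:.2f} log cycles")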
pH Several factors affect ozonation in food products, such as the ozone concentration, temperature, and pH [21]. The pH value of a solution is important in determining the decontamination capability of a system [22]. In addition, pH is one of the main quality parameters used to determine product freshness [23]. The pH values of the treated samples are shown in Table 2. Following ozonation, the pH of the treated samples mostly increased slightly, except for the sample with a 1 minute treatment time. The reduction in pH for the 1 minute treatment may occur due to the reaction of reactive species, such as ozone, with water at the water-gas interface [24]. The different pH results can be attributed to the buffering capacity of the liquids [25]. Silva (2015) studied the effect of combining ozone and heat treatment on sugarcane juice [26]; the use of ozone in that study did not change the pH of the sugarcane juice. Total dissolved solids Total dissolved solids are substances such as inorganic salts and other organic substances dissolved in water, and include the anions and cations of a sample [27]. Total dissolved solids can be used as an indicator of emulsion stability [28]. The total dissolved solids of the treated samples are shown in Table 2. The results show an increase in total dissolved solids in the treated samples, except for the sample with a 1 minute treatment time. This increase may be caused by the reaction of ozone with organic components of the soymilk. Abhilasha (2018) stated that ozone reacts with organic components as an electrophilic or nucleophilic agent, the reaction being with unsaturated compounds [29]. Conclusion After ozonation from 1 to 15 minutes, the total plate count was reduced, but after 10 and 15 minutes of ozonation it increased again. The microbial count did not conform with the Indonesian National Standard for soymilk (SNI 01-3830-1995). The ozonation treatment can reduce the microbial count, but did not reach the required level. The results show that the pH and total dissolved solids of the treated samples mostly increased slightly, except for the sample with a 1 minute treatment time. Thus, the results highlight ozonation as an alternative means of reducing the microbial load in soymilk. The remaining microbial load may be addressed by increasing hygiene during ozonation processing, controlling the concentration or treatment time, and by combining the treatment with storage at cold temperatures. Further research on optimal ozonation conditions and on the effects of ozonation on nutritional compounds is needed.
2020-04-02T09:31:37.763Z
2020-03-26T00:00:00.000
{ "year": 2020, "sha1": "65086967733539982b19e13cb84b7c37321b3412", "oa_license": "CCBY", "oa_url": "https://doi.org/10.1088/1755-1315/443/1/012100", "oa_status": "GOLD", "pdf_src": "IOP", "pdf_hash": "8452f5d3792fde4a7e455d900c813d3d604d4a86", "s2fieldsofstudy": [ "Environmental Science" ], "extfieldsofstudy": [ "Physics", "Chemistry" ] }
97833552
pes2o/s2orc
v3-fos-license
Intelligent structure design of membrane cathode assembly for direct methanol fuel cell The performance and the structural model of membrane electrode assembly (MEA) have been developed and experimentally verified with fundamental calculations of the direct methanol fuel cell (DMFC). The model provides information concerning the influence of the operating and structural parameters. The composition and performance optimization of MEA structure in DMFC has been investigated by including both electrochemical reaction and mass transport process. In the experimentation, the effect of Nafion content and loading method in the catalyst layer of cathode for DMFC was investigated. For the spray method electrode (SME), the cell performance and cathode performance using a dynamic hydrogen electrode (DHE) as a reference electrode was improved in comparison with those of the PME electrode by decreasing cathode potential. From ac impedance measurements of the cathode, the adsorption resistance of the SME electrode was decreased compared with that of the PME electrode. The higher cell performance was mostly dependent on the adsorption resistance. In the modelling, the cathode overpotential was decreased with increasing ionomer content, due to increasing ionic conductivity for proton transfer and the larger reaction site. The resistance to oxygen transport was increased at the same time, and became dominant at higher ionomer loadings, leading to an increase in the voltage loss. The ratio of ionomer to void space in the cathode affected the cathode polarization, which had the lowest resistance of oxygen diffusion at the ratio of 0.1–0.2. Copyright © 2005 John Wiley & Sons, Ltd. typically utilized as the anode catalyst, and Pt/C is utilized as the cathode catalyst. Increasing the reaction sites in the catalyst layer is important for improving the electrode performance (Shin et al., 2002;Thomas et al., 1999). Since the polymer membrane-like Nafion117 membrane used for the electrolyte is a solid phase, the membrane cannot deeply penetrate into the electrode as a liquid one does, therefore, the reaction area is limited to the contact surface between the electrode and membrane. Furthermore, the conductivity of the Nafion117 membrane is 10 S m À1 at 298 K (Okajima et al., 2001. The cell resistance using the Nafion117 membrane as described by Gottesfeld et al. was 0.17 O cm 2 , a lower contact resistance (Ren et al., 1996). For the purpose of improving the resistance and increasing the contact surface area, an ionomer like Nafion is added to the surface of the catalyst particle (Fujita and Tanigawa, 1985;Antolini et al., 1999). Here, the Nafion loading method can be classified into the following two methods (PME: paste method, SME: spray method). For the PME, the loading method is a general and well-known method in which the PME ink includes Nafion and the catalyst load on a carbon cloth. The oxygen can easily reach the reaction sites, but the catalyst utilization is low and the cell performance is also low. Therefore, we suggested the new SME loading method. In the SME, the catalyst ink with no Nafion is loaded on the carbon cloth. Nafion is then sprayed on the surface of the catalyst layer. The three phase boundary area is increased due to the better contacts. Meanwhile, the excess Nafion polymer hinders gas diffusion to the reaction sites. 
In a particular cathode, the excess of Nafion polymer on the catalyst surface is caused by flooding in the reaction sites (Furukawa et al., 2002(Furukawa et al., , 2004(Furukawa et al., , 2005. Furthermore, few studies in literature have focused on the catalyst layer performance and its composition optimization under various conditions. The present study in the PEFC follows earlier works of Bernardi and Verbrugge (1991). In the DMFC, the present study follows works of Scott et al. (1997) and Dohle et al. (2000). A steady state, isothermal and one-dimensional model of the catalyst layer is formulated by including both electrochemical kinetics and mass transport processes. The objective of this study is to clarify the effect of the Nafion content and loading methods (PME and SME) on the surface of the cathode catalyst layer for improving the cell performance and determining the optimal amount of the Nafion ionomer. By using the method of alternate current impedance spectroscopy (Muller et al., 1999;Diard et al., 2003), the variable resistances of an equivalent circuit for the MEA were determined. The results of this study highlight the existence of optimal operating and design parameters in the catalyst layer of cathode for DMFC, and will be useful in aiding the design of practical DMFC. EXPERIMENTAL The MEAs were prepared as follows: the anode consisted of a carbon cloth support (E-TEK, type A) on which was spread a thin 2.0 mg cm À2 layer loading of 53.1% Pt-Ru/C (Tanaka Precious Metals Co.) bound with a 20 wt% Nafion solution dissolved in butyl acetate (Uchida et al., 1998). The cathode was constructed using a method similar to the anode using a diffusion layer bound with 2.0 mg cm À2 of 47.0% Pt/C (Tanaka Precious Metals Co.) as the catalyst layer. A Nafion solution for the SME electrode was applied to the surface of the cathode catalyst layer that included no ionomer. Furthermore, the cathodes used in the experiments were also the asreceived ELAT electrode (E-TEK, ELAT/NC/DS/V3) and T-1 electrode (Sudoh et al., 2000) for comparative purposes. Details of the experimental conditions for cathode catalyst layer are shown in Table I. The electrodes were placed on either side of a pretreated Nafion117 membrane. This pretreatment involved boiling the membrane for 1 h in 3 vol% H 2 O 2 , 1 h in deionized (DI) water, 1 h in 0.5 kmol m À3 H 2 SO 4 and 1 h in deionized (DI) water, followed by washing in deionized (DI) water (Scott et al., 1997). The assembly was hot-pressed at 10 MPa for 2 min at 398 K. The resulting MEA was installed in the cell after pressing, and supplied with water on the anode side and 2.5 Â 10 À6 m 3 s À1 O 2 on the cathode side aged at 363 K for several hours. Investigations of the DMFC were performed using an experimental setup. The cell was fitted with an MEA sandwiched between two graphite blocks having serpentine channel flow paths cut out for the methanol and oxygen flows. The cell was held (6 Nm) together between two backing plates using a set of retaining bolts positioned around the cell. The anode flow-field through the vaporizer heated at 473 K was 2.0 kmol m À3 and the flow rate of the methanol solution was 5.0 Â 10 À8 m 3 s À1 . The outlet flows were controlled to impose the desired amount of back pressure at 0.1 MPa. Oxygen was supplied from a cylinder at ambient temperature. The flow rate was 5.9 Â 10 À6 m 3 s À1 . The cell temperature was maintained at 363 K. The outlet flows could be controlled to impose the desired amount of back pressure at 0.2 MPa. 
In addition to the usual cell hardware, our cells also contained a dynamic hydrogen electrode (DHE) as a reference electrode to resolve the anode and cathode performances in the DMFCs. The DHE electrode was a 0.5 mm diameter Pt-black coated Pt wire. The centre of the DHE was separated from the fuel cell electrode edges by 5 mm. The cell performance and electrochemical properties of the DMFC system were investigated using the electrochemical measuring system (Scribner Associates Inc., series 890B, Solartron SI 1250, Hokuto Denko HZ-3000 and Solartron SI 1287/SI 1260). The impedance spectra were usually obtained at frequencies between 65 kHz and 3 mHz (Muller et al., 1999). Figure 1 shows the effect of the MEA preparation on the cell performance at 363 K. By increasing the Nafion content of the cathode, the current density at a cell voltage of 0.4 V was increased to 258 mA cm À2 for the ratio of ionomer to void space of 0.1 (PME-0.1) when compared to 128 mA cm À2 for the ratio of ionomer to void space of 0.05 (PME-0.05). RESULTS AND DISCUSSION Meanwhile, an excess Nafion content decreased the cell performance and cell voltage in the short circuit current region. The limiting current density is ascribed to the high concentration overvoltage during cathode flooding to the cathode reaction sites. Figure 2 shows the effect of the MEA preparation in the DMFC impedance plots, and Figure 3 shows an equivalent circuit for the ac impedance analysis. Here, R 1 is the cell resistance including the electrolyte and membrane resistances. R 2 is the charge-transfer resistance at the interface of the catalyst layer. R 3 is the adsorption resistance of the oxygen dissociative chemisorption at the catalyst reaction site. C 2 is the capacitance of the double layer and C 3 is the capacitance of the adsorption (Kim et al., 2001;Ciureanu and Wang, 1999). Generally, the R 3 C 3 circuit is treated as the diffusion impedance. However, the diffusion impedance is completely analogous to the wave transmission in a finite length RC transmission line. Therefore, the equivalent circuit can be approximately expressed by an RC circuit. Furthermore, the charge-transfer reaction and the chemisorption reaction was described by Kamiya (2001). Here, the inductance L 1 and R 4 are characteristics of this equivalent circuit, and thus deserve an explanation with respect to its mechanistic significance. Inductive behaviour means that the current signal follows a voltage perturbation with a phase delay. The inductive behaviour can be explained using the kinetic theory for reactions involving intermediate adsorbates (Diard et al., 2003). In Figure 2, the measured values were well fitted using the parameters depicted in Figure 3. The diameter of the arc at low frequency decreased by increasing the Nafion content. Meanwhile, for the ratio of ionomer to void space of 0.46, the diameter became larger than that of the ratio of ionomer to void space of 0.1. Figure 4 shows the effect of the ratio of ionomer to void space on the internal resistances (R 2 and R 3 ) and capacitances (C 2 and C 3 ). The chargetransfer resistance (R 2 ) decreased more than that of the higher Nafion content and the double layer capacitance (C 2 ) became higher than that of the excess Nafion content. Since the dispersion of the Nafion polymer increased, the reaction site of the catalyst layers increased the area of the three-phase boundary. 
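As an aside, the equivalent circuit described above (cell resistance R 1 in series with the charge-transfer element R 2 in parallel with C 2, and the adsorption element R 3 in parallel with C 3) can be written directly as a complex impedance. The Python sketch below evaluates such a spectrum over the measured frequency range; the parameter values are placeholders, and the inductive branch (L 1, R 4) mentioned in the text is omitted for simplicity, so this is an illustrative sketch rather than the authors' fitting model.

import numpy as np

def Z_equivalent(f, R1, R2, C2, R3, C3):
    # Series resistance plus two parallel RC elements, evaluated at frequency f (Hz)
    w = 2 * np.pi * f
    Z_ct  = R2 / (1 + 1j * w * R2 * C2)   # charge-transfer arc
    Z_ads = R3 / (1 + 1j * w * R3 * C3)   # adsorption (low-frequency) arc
    return R1 + Z_ct + Z_ads

freqs = np.logspace(np.log10(3e-3), np.log10(65e3), 200)   # 3 mHz to 65 kHz, as in the measurements
Z = Z_equivalent(freqs, R1=0.2, R2=0.5, C2=1e-2, R3=1.0, C3=1.0)
# Nyquist-plot coordinates: real part vs. negative imaginary part
nyquist = np.column_stack([Z.real, -Z.imag])
print(nyquist[:3])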
Meanwhile, the ratio of ionomer to void space was not proportional to the area of the reaction site, because the excess ionomer volume increased the ionomer thickness on the catalyst sites. The adsorption resistance (R 3) and capacitance (C 3) also decreased. However, an excess Nafion content increased R 3 and C 3 due to the increased Nafion polymer thickness at the reaction sites in the catalyst layers; the larger volume of ionomer suppressed the oxygen chemisorption on the catalyst sites. The adsorption resistance (R 3) shown in Figure 4 correlated reasonably well with the cell performance shown in Figure 1. This correlation clearly suggests that a low ionomer content in the catalyst layer is beneficial. The optimum value of the ratio of ionomer to void space was 0.1 (PME-0.1). Figure 5 shows the polarization curves for the SME electrode compared to the PME electrode. The best performance (921 mA cm⁻² at the short-circuit current) was obtained using the ELAT electrode with a thin catalyst layer. For the SME electrode, the cell performance was higher than that of the PME electrode because the PME and SME electrodes differed in catalyst layer thickness (Furukawa et al., 2005). [Figure 2: Effect of Nafion content on the DMFC impedance plots.] The cell performance of the T-1 electrode was inferior to the other electrodes because the T-1 electrode, used for chlor-alkali electrolysis, did not include a polymer electrolyte such as Nafion ionomer in the catalyst layer. Ren et al. (1996) studied the cell performance based on the anode and cathode performances, with a DHE reference electrode. One of the problems of the previously reported DHE within the fuel cell structure was a serious potential drift when the cell was operated at high current density. This was explained by changes in the water activity in the vicinity of the reference electrode, caused by the electro-osmotic drag of water. This problem was solved in our DHE by sufficiently separating the DHE from the edge of the fuel cell electrodes (4 mm as compared to the membrane thickness of about 0.2 mm). In the present study, measurement of the cathode polarization behaviour using the DHE was employed to diagnose the polarization behaviour of the unit cell without modifications of the operating cell fixture and MEA. In Figure 5, the polarization curves of the cathode using the PME and SME electrodes are confirmed to be significantly different. For the SME electrode, the cathode performance was higher than that of the PME electrode. Meanwhile, the polarization curves of the cathode using the PME and SME electrodes show voltage drops above a current density of 600 mA cm⁻² because of the higher mass transfer resistances. The limiting current density is ascribed to the high concentration overvoltage caused by cathode flooding of the cathode reaction sites. Thus, the cell performance of the SME, with the smallest interfacial resistance (Furukawa et al., 2005), reached a power density of 130 mW cm⁻². Modelling of DMFC The performance and structural model of the MEA has been developed and experimentally verified with fundamental calculations of the DMFC. The model provides information concerning the influence of the operating and structural parameters. The composition and performance optimization of the MEA structure in the DMFC has been investigated by including both the electrochemical reaction and the mass transport process.
To evaluate cell performance and polarization performance, a mathematical model is developed to predict the cell voltage and current density of the fuel cell. Mass transport in the porous electrode and the concentration distribution in the electrode region were considered. Results of this paper highlight the existence of optimal operating and design parameters in the catalyst layer of the cathode for DMFC, and will be useful in aiding the design of practical DMFC. The spatial co-ordinate x is defined in Figure 6 so that the positive direction points from the cathode electrode to the membrane with its origin located at the interface between the cathode electrode and catalyst layer. All fluxes are taken as positive in the positive x-direction, while the ionic current density is opposite to the x-direction. The void space is usually sufficiently large compared to the ionomer volume. The catalyst layer is approximately one dimensional to the x-direction. In the catalyst layer, protons diffuse into the layer from the anode through the membrane on the left-hand side, and oxygen diffuses into the void space of the catalyst layer from the diffusion layer of the righthand side. Then, the diffused oxygen dissolves in the ionomer and the water layers. Electrons from the electrode travel through either the catalyzed carbon particles or platinum particles, depending on the type of catalyst used, to the catalyst surface. On the surface of the catalyst particles, the oxygen is consumed along with the protons and electrons, and the water vapour is produced along with the waste heat. The reaction in the cathode of the DMFC is as follows: The water vapour formed at the cathode surface transports through the cathode electrode or via back diffusion through the membrane to the anode, depending on the operating condition. Therefore, the ionomer layer surrounding the catalyst particles is taken to be fully hydrated, and the void region in the catalyst layer is assumed to be non-flood, as well as many other researchers have assumed open gas pores in their modelling of single cell performance. Meanwhile, Li's model of the catalyst layer was assumed to be fully flooded (Marr and Li, 1999). A model to evaluate the cathode polarization considers the variation in the concentrations of the above reaction species in the cathode, and the associated mass transfer processes. As seen in Figure 6, the diffusion layer of the cathode is comprised of a highly porous carbon cloth backing layer and a thin layer of uncatalyzed PTFE bound carbon. The layer of porous electrocatalyst, is attached on the Nafion membrane. The model for the cathode in the DMFC needs to account for changes in potential and the transfer of methanol from the anode to the cathode. Furthermore, methanol is assumed to diffuse through the membrane and reacts to carbon dioxide on the cathode catalyst. The cathode potential is calculated by using the overpotential and crossover effect. where E 0 cathode is the half-cell potential. The cathode overpotential is described by Tafel kinetics at the electrode, and a one-dimensional potential and concentration distribution is calculated within the thickness of the catalyst layer. The effect of methanol crossover, i.e. the crossover overpotential, Z xover ; is calculated from the flux of methanol through the membrane (Scott et al., 1997). The current distribution within the porous electrode was caused by considering poor mass transport (diffusion) and low protonic conductivity (Marr and Li, 1999;Yao et al., 2004). 
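A rough numerical illustration of the cathode-potential bookkeeping described above: the cathode potential is taken as the half-cell potential minus a Tafel-type activation overpotential and a crossover overpotential. The functional forms and all parameter values in this Python sketch are assumptions for illustration only, since the paper does not list them explicitly.

import numpy as np

def cathode_potential(i, E0=1.23, b=0.07, i0=1e-4, eta_xover=0.05):
    # E0: half-cell potential (V); b: Tafel slope (V/decade); i0: exchange current density
    # (A/cm^2); eta_xover: overpotential attributed to methanol crossover (V).
    eta_act = b * np.log10(np.maximum(i, i0) / i0)    # Tafel kinetics
    return E0 - eta_act - eta_xover

for i in [0.05, 0.1, 0.3]:                            # current densities in A/cm^2
    print(f"i = {i:.2f} A/cm^2 -> E_cathode ~ {cathode_potential(i):.3f} V")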
To illustrate the model, an oxygen consuming cathode is shown in Figure 6. In Figure 7, the volume of the void space occupied by ionomer varied from 5 to 50% of the void space for current densities from 200 to 600 mA cm À2 at the operating conditions specified in the previous section and Li's data (Marr and Li, 1999). The cathode overpotential was found to decrease as ionomer content increased, due to increase in ionic conductivity for proton transfer and the larger reaction site. Meanwhile, the resistance of oxygen transport increased at the partially flood condition, and became dominant at higher ionomer loadings, leading to an increase in the voltage loss. It was also clear that the overpotential increased significantly with the cathode current density, and the minimum voltage loss and the corresponding optimal ionomer loading depended on the cathode current density as well. The ratio of ionomer to void space affected the cathode polarization, which had the lowest resistance of oxygen diffusion at the ratio of 0.1-0.2. The optimization of the ratio of ionomer to void space was 0.1-0.2 in agreement with Li's data (Marr and Li, 1999). The cathode overpotential reported by Li increased in comparison with our results in Figure 7 (Marr and Li, 1999). The difference of the cathode potential was due to the flooding condition. Thus, the excess ionomer content contributed to increasing the cathode overpotential and the partial flooding in the catalyst layer. The optimization of the ratio of ionomer to void space was 0.1-0.2, which agreed with the experimental results (Figures 1 and 4). CONCLUSIONS The Nafion content of the cathode improved the current density at a cell voltage of 0.4 V up to 258 mA cm À2 for the ratio of ionomer to void space of 0.1 in comparison to the 128 mA cm À2 for the ratio of ionomer to void space of 0.05. The Nafion solution in the catalyst layers is available for high cell performance and lower interfacial resistance (R 1 and R 2 ). Based on the ac impedance measurement results, the increase in the Nafion loading decreased the arc diameter at low frequencies. The optimum value of the ionomer/void space ratio was 0.1. The higher cell performance depended on the adsorption resistance (R 3 ) of the oxygen dissociative chemisorption. The presence of a slight amount of ionomer on the catalyst surface is necessarily the key for the higher cell performance of the DMFC. For the SME electrode, the cell performance and cathode performance using a DHE reference electrode was higher than that of the PME electrode and the max power density was 130 mW cm À2 . The composition and performance optimization of MEA structure in DMFC has been investigated with both electrochemical reaction and mass transport process. The cathode overpotential decreased with the increasing ionomer content, due to the increasing ionic conductivity for proton transfer and the larger reaction site. The resistance to oxygen transport was increased at the same time, and became dominant at higher ionomer loadings, leading to an increase in the voltage loss. The overpotential was increased significantly with the cathode current density, and the minimum voltage loss and the corresponding optimal ionomer loading depended on the cathode current density as well. The ratio of ionomer to void space affected the cathode polarization, which had the lowest resistance of oxygen diffusion at the ratio of 0.1-0.2.
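The competition described above, between better proton conduction and a larger reaction site at higher ionomer content versus poorer oxygen transport at excessive loading, can be illustrated with a toy voltage-loss model in which one contribution falls and the other rises with the ionomer-to-void-space ratio. This is purely a qualitative Python sketch with made-up coefficients, intended only to show why an intermediate ratio minimizes the total loss; it is not the transport model used in the paper.

import numpy as np

ratio = np.linspace(0.02, 0.5, 200)        # ionomer-to-void-space ratio
loss_proton = 0.02 / ratio                 # falls as ionic conductivity / reaction area grow
loss_oxygen = 3.0 * ratio**2               # rises as oxygen diffusion is hindered
total = loss_proton + loss_oxygen
print("minimum total loss at ratio ~", round(float(ratio[np.argmin(total)]), 2))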
2019-04-06T00:42:57.014Z
2005-10-10T00:00:00.000
{ "year": 2005, "sha1": "fdd1aa1608b5d1cf2da84f874c7c0837b152ccbf", "oa_license": null, "oa_url": "https://doi.org/10.1002/er.1140", "oa_status": "GOLD", "pdf_src": "Wiley", "pdf_hash": "e63e22dae051a43dd621813f8b648242ae56385c", "s2fieldsofstudy": [ "Engineering", "Chemistry" ], "extfieldsofstudy": [ "Chemistry" ] }
250185766
pes2o/s2orc
v3-fos-license
Force Transmission in Disordered Fibre Networks Cells residing in living tissues apply forces to their immediate surroundings to promote the restructuration of the extracellular matrix fibres and to transmit mechanical signals to other cells. Here we use a minimalist model to study how these forces, applied locally by cell contraction, propagate through the fibrous network in the extracellular matrix. In particular, we characterize how the transmission of forces is influenced by the connectivity of the network and by the bending rigidity of the fibers. For highly connected fiber networks the stresses spread out isotropically around the cell over a distance that first increases with increasing contraction of the cell and then saturates at a characteristic length. For lower connectivity, however, the stress pattern is highly asymmetric and is characterised by force chains that can transmit stresses over very long distances. We hope that our analysis of force transmission in fibrous networks can provide a new avenue for future studies on how the mechanical feedback between the cell and the ECM is coupled with the microscopic environment around the cells. INTRODUCTION Living tissues are constituted by the extracellular matrix (ECM), a complex network of proteins and polysaccharides that gives structural support to surrounding cells. In animal tissues, the main component of the ECM is collagen, which forms a crosslinked network of stiff fibres that provides the ECM with its elasticity and mechanical strength (Mouw et al., 2014;Burla et al., 2019a). Cells are embedded within this network and are linked to the matrix by focal adhesion complexes (FAs), which act as physical anchors via which cells can mechanically interact with their environment (Totsukawa et al., 2004;Lecuit et al., 2011). Indeed, many cellular processes are regulated by mechanical feedback between cells and the ECM. Cells actively exert forces on the surrounding matrix, leading to structural reorganisations in the surrounding network, like fibre alignment, plastic rearrangements, and densification around the cell (Vader et al., 2009;Kim et al., 2017;Sopher et al., 2018;Goren et al., 2020). Cells also sense the mechanical properties of the surrounding medium. For example, cancer cells exhibit a preferential migration to regions with higher stiffness (durotaxis) (Lo et al., 2000;DuChez et al., 2019;Rens and Merks, 2020), and can adapt their shape as a function of the matrix stiffness (Koch et al., 2012). Likewise, wound healing requires contractile forces applied by myofibroblasts around the injured zone (Li and Wang, 2011). Cells also use mechanical signals to communicate with other cells; they actively exert forces on the surrounding matrix, transmitted through the ECM to distant cells (Reinhart-King et al., 2008;Winer et al., 2009;Han et al., 2018). This mechanical signalling is believed to play an essential role in tissue development, as well as in the development of cancer and other diseases (Bates et al., 2007;Hinz et al., 2012). Therefore, understanding how forces propagate in the extracellular matrix is relevant for obtaining fundamental knowledge about biological processes in both healthy and pathological tissue. Likewise, this insight can be revealing in tissue engineering, where tailoring the mechanical properties and cell-matrix interactions are crucial for the successful development of artificial tissues and organs (Chen et al., 2004;Causa et al., 2007;Wegst et al., 2015). 
The challenge in describing mechanical signal propagation through the ECM is that the ECM is a very heterogeneous fibre network, with a typical mesh size that is comparable to the size of the cell. This means that continuum theories cannot be used (Notbohm et al., 2015;Ronceray et al., 2016;Han et al., 2018). The heterogeneity of the fibre network is regulated by the network connectivity z and the bending rigidity of the fibres, which thereby influence the mechanical response of the ECM. It is well-known that networks with only central-force interactions become mechanically stable only when the connectivity exceeds a critical threshold known as the isostatic point, which has been shown by Maxwell to be equal to z c = 2d, with d the spatial dimensionality (Maxwell, 1864). However, the extracellular fibre networks surrounding cells have a lower connectivity, ranging from z = 3 for branched networks to z = 4 for cross-linked fibres. In particular, collagen networks exhibit an average connectivity 〈z〉 ≈ 3.4, making them sub-isostatic (Jansen et al., 2018). For such networks, the bending rigidity of the fibres κ emerges as an additional mechanism to induce network stability (Broedersz et al., 2011). The bending rigidity is related to the persistence length l p of the fibres as l p κ/(k B T), which describes the length scale of undulations of a polymer driven by thermal energy k B T. For collagen fibres the persistence length is typically much larger than the contour length of the fibres, which means that collagen fibres are stiff and entropic effects due to fluctuations can be neglected. The interplay between connectivity and fibre bending leads to a strongly nonlinear mechanical response to applied stresses (Licup et al., 2015;Sharma et al., 2016a;Jansen et al., 2018). At low strains, the network is soft with a response governed by fibre bending and non-affine network reorganisations. At higher strains, alignment of the fibres in the strain direction leads to fibre stretching, making the network much more rigid (Narmoneva et al., 1999;Vader et al., 2009;Broedersz et al., 2011;Licup et al., 2015). It has been shown that this nonlinearity has a pronounced impact on how forces propagate in the network (Baker and Chen, 2012;Jones et al., 2015;Han et al., 2018). To understand mechanical signalling between cells in the ECM, it is thus necessary to develop a model for force propagation that incorporates the disordered network structure and its mechanical nonlinearity. To do this, we employ a minimalist model based on two-dimensional triangular athermal networks, where the disorder is induced by controlling the connectivity. Such network models have been shown to give a very accurate description of the mechanics of collagen networks (Licup et al., 2015;Sharma et al., 2016a;Burla et al., 2020). To model an embedded cell, we incorporate a rigid circular body, which shrinks in area, generating local compression. We then examine how forces propagate from the contracting cell through the network, using concepts from network theory. Our findings reveal that the propagation in the case of high connectivity is isotropic and limited when the surrounding network around the cell is highly stressed. By contrast, asymmetry emerges at low connectivity, and the transmission achieves larger distances. The bending rigidity in this regime has a more pronounced role in controlling the force transmission. 
MODELING We perform numerical simulations on 2D diluted triangular networks of N × N nodes, with N = 100 and lattice spacing l_0. Periodic boundary conditions are applied in all directions. We dilute the lattice by randomly removing bonds with probability 1 − p, and remove all dangling ends. This leads to an average network connectivity 〈z〉 ≈ p z_max, with z_max = 6 for our triangular lattice. We model fibrous biopolymers such as collagen by considering both stretching and bending rigidity. Thus, we treat every bond in the diluted network as a Hookean spring with stretching modulus μ, while sequences of contiguous colinear bonds have an associated bending rigidity κ. The Hamiltonian H = H_stretch + H_bend that quantifies the network energy is expressed as H = (μ/2) Σ_⟨ij⟩ (l_ij − l_ij,0)²/l_ij,0 + (κ/2) Σ_⟨ijk⟩ (θ_ijk − θ_ijk,0)²/l_ijk,0, where, in the first term, the sum runs over the bonded pairs ⟨ij⟩, l_ij denotes the distance between the two nodes, and l_ij,0 indicates the rest length. The second term accounts for the bending energy and runs over the bonded triplets ⟨ijk⟩, with θ_ijk the angle of the triplet, θ_ijk,0 the rest angle, and l_ijk,0 = (l_ij,0 + l_jk,0)/2. We fix μ = 1 and l_0 = 1, and define a reduced bending rigidity κ̃ = κ/(μ l_0²) to specify the relative importance of bending stiffness compared to stretching stiffness. Since biological fibres are typically much softer with respect to bending than to stretching, we consider only cases with κ̃ ≪ 1. According to Maxwell's rigidity criterion, the isostatic point (i.e., the connectivity below which the rigidity of the network vanishes) for these networks in the absence of bending stiffness (i.e., for κ̃ = 0) is at p = 0.65, while for non-zero κ̃ the rigidity threshold is lower, at p = 0.442 (Maxwell, 1864; Broedersz et al., 2011). It has been shown that for p > 0.65 the mechanical properties of these networks are dominated by stretching of the fibres, while for 0.45 < p < 0.65 non-affine fibre bending modes govern the mechanics (Ronceray et al., 2016; Broedersz et al., 2011). Here, we compare four different connectivities, namely p = 0.85, 0.75, 0.65, and 0.55, which translate into 〈z〉 ≈ 5.1, 4.5, 3.9, and 3.3, respectively. In Figure 1A we show an example of the final network for p = 0.55. We choose these parameter values to explore a range that includes the connectivity of in-vitro reconstituted collagen networks, reported to be in the range 0.5 − 0.65 (Jansen et al., 2018; Burla et al., 2020). The normalized bending rigidity of collagen fibres has been reported to be on the order of 10⁻⁴, but it can reach higher values for strongly bundled fibres (Jansen et al., 2018; Burla et al., 2020). We therefore explore κ̃ = 10⁻⁴, 10⁻³, and 10⁻². We furthermore emphasise that the network model used here has previously been shown to accurately describe the mechanical properties and fracture of real collagen networks (Broedersz and MacKintosh, 2014; Burla et al., 2020; Tauber et al., 2022). We then introduce a circular rigid body with radius R_0 = 3l_0 that mimics an embedded cell in the center of the network, as shown in Figure 1B, and we place nodes on the intersection points between the network and the surface of the cell, while adjusting the corresponding equilibrium bond lengths, see Figure 1C. Nodes in the interior of the cell are removed. Finally, we induce an isotropic contraction of the cell body by applying an affine deformation that moves the nodes on the cell surface towards the cell center, as schematically represented in Figure 1D.
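As an aside, the energy function written above can be evaluated in a few lines. The Python sketch below computes the stretching and bending terms for a toy set of bonds and colinear triplets; it is a minimal illustration assuming the harmonic form given above (with μ = 1, l_0 = 1), not the authors' simulation code.

import numpy as np

mu, kappa = 1.0, 1e-4          # stretching modulus and bending rigidity (reduced units)

def stretch_energy(pos, bonds, rest_lengths):
    E = 0.0
    for (i, j), l0 in zip(bonds, rest_lengths):
        l = np.linalg.norm(pos[i] - pos[j])
        E += 0.5 * mu * (l - l0) ** 2 / l0
    return E

def bend_energy(pos, triplets, rest_angles, rest_lengths_ijk):
    E = 0.0
    for (i, j, k), th0, l0 in zip(triplets, rest_angles, rest_lengths_ijk):
        a, b = pos[i] - pos[j], pos[k] - pos[j]
        th = np.arccos(np.clip(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)), -1, 1))
        E += 0.5 * kappa * (th - th0) ** 2 / l0
    return E

# Three nodes on a slightly bent fibre segment
pos = np.array([[0.0, 0.0], [1.0, 0.0], [2.0, 0.1]])
print(stretch_energy(pos, [(0, 1), (1, 2)], [1.0, 1.0]) +
      bend_energy(pos, [(0, 1, 2)], [np.pi], [1.0]))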
This local deformation is quantified by the strain ϵ = −(R − R_0)/R_0, where R is the cell's radius after contraction. After each strain step, fixed at Δϵ = 0.001, the network is equilibrated by minimizing the energy with the FIRE algorithm (Bitzek et al., 2006) on the remaining nodes of the network, with a tolerance F_RMS = 10⁻⁸. Hence, thermal fluctuations are ignored and the fibre network is modelled as an athermal elastic network. Previous work has shown that this is a good assumption for collagen networks (Broedersz et al., 2011; Licup et al., 2015; Rens et al., 2016; Arzash et al., 2020; Burla et al., 2020). The different observables discussed below are averaged over 20 independent simulations for p = 0.85 and 0.75, and over 50 for p = 0.65 and 0.55, for every κ̃. Local Deformation in the Network Cell contraction leads to mechanical stresses in the surrounding network. To investigate how these stresses propagate for different contractile strains, we identify the nodes in the network that have at least one stretched or compressed bond. Here we define a bond to be stretched or compressed when the corresponding force f_ij is equal to or greater than a threshold f^(th), which we take to be the maximum localized force in the network when the energy exceeds the numerical error. Figure 2 shows snapshots of the stressed bonds in the network for four different connectivities, highlighting a dense, stressed region in the vicinity of the cell at high connectivity, which becomes more irregular in sparse networks with lower p. [Figure 2 caption: compressed (blue) and stretched (f_ij > f^(th), red) bonds for κ̃ = 10⁻⁴ at ϵ = 0.50. The black circle has a radius r* ~ 10 l_0, corresponding to the boundary between the dense and diffuse regions observed by computing ϕ(r). The radial stress in the inner region decays as σ_rr(r) ∝ r⁻². Bond thickness is proportional to force magnitude.] To investigate in more detail how the stress propagation depends on the cell contraction and network connectivity, we compute the fraction of nodes with at least one out-of-equilibrium bond as a function of the distance r to the cell centre, ϕ(r) = 〈N_d(r)/N(r)〉, where the brackets 〈·〉 indicate ensemble averaging, N_d(r) is the number of nodes at distance r that have a stretched or compressed bond (as specified above), and N(r) is the total number of bonds at distance r. These results are reported in Figure 3A for different strains ϵ and the four connectivities studied here at κ̃ = 10⁻⁴. For the highest connectivity, p = 0.85, we observe a region close to the cell where all bonds carry stress (i.e., ϕ(r) ≈ 1), which extends over larger distances as the strain increases. At larger distances, ϕ(r) decays, indicating that the stress pattern becomes more diffuse far away from the cell. The boundary between the dense and diffuse regions is also indicated in the snapshot in Figure 2A. When the network connectivity decreases to p = 0.75, the fully stressed region shrinks, and at p ≤ 0.65 it disappears completely. Remarkably, however, ϕ(r) develops a long tail, which decays over longer distances as the connectivity is reduced. This indicates that, while the stress pattern is more diffuse in sparsely connected networks, the mechanical perturbation can be perceived over greater distances as p decreases (Figure 3A). We also study how the bending rigidity κ̃ of the fibres influences the force propagation.
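As an aside, the radial-binning computation of ϕ(r) defined above can be sketched in a few lines of Python; the array names and bin width are assumptions for illustration, not the authors' analysis code.

import numpy as np

def phi_of_r(node_pos, cell_center, stressed_node_ids, bin_width=1.0):
    # Fraction of nodes with at least one stretched/compressed bond, per radial bin,
    # for a single network realization (ensemble averaging would be done over runs).
    r = np.linalg.norm(node_pos - cell_center, axis=1)
    bins = np.arange(0.0, r.max() + bin_width, bin_width)
    idx = np.digitize(r, bins)
    stressed = np.zeros(len(node_pos), dtype=bool)
    stressed[list(stressed_node_ids)] = True
    phi = []
    for b in range(1, len(bins)):
        in_bin = idx == b
        phi.append(stressed[in_bin].mean() if in_bin.any() else np.nan)
    return bins[1:], np.array(phi)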
For p ≥ 0.75, we do not find any significant difference in the behaviour of ϕ(r) (not shown for clarity). This is expected, because these networks are above the isostatic point, where the mechanical response is completely governed by fibre stretching (Broedersz et al., 2011). However, significant changes are observed at lower p, where bending modes become important. In particular, for p = 0.65 we see that increasing the bending rigidity leads to a higher fraction of stressed bonds close to the cell, while the decay at larger distances becomes steeper. From these results, we can conclude that stresses tend to concentrate in a region around the contractile cell in rigid fibre networks, while for sparser, softer networks, the stresses branch out over a large but very diffuse area.

We further characterize the propagation of forces generated by the cell by computing the local stress tensor at each bond connecting nodes i and j (Ronceray et al., 2016), defined as

σ_αβ^(ij) = f_α^(ij) r_β^(j),

where f_α^(ij) is the α-component of the force supported by the bond between nodes i and j, and r_β^(j) is the β-component of the position of node j. In particular, we compute the radial component σ_rr and average this for each radial distance r, using circular bins of thickness Δr. In Figure 3B we show σ_rr(r) (averaged over many independent network realizations) as a function of κ̃ for all connectivities discussed here, at a strain ϵ = 0.50. For p = 0.85, we find that σ_rr(r) ∝ r⁻² up to a distance r ≤ r*, which corresponds to the transition from the fully stressed inner region to the more diffuse outer region. This decay is consistent with the stress profile expected for continuous, linearly elastic media in two dimensions (Ronceray et al., 2016). For p = 0.75, the stress exhibits a more complex behaviour, emphasised by the presence of a slower decay as σ_rr(r) ∝ r⁻¹ in the vicinity of the cell, before recovering the linearly elastic decay σ_rr(r) ∝ r⁻². As discussed previously (Ronceray et al., 2016), this cross-over is related to the non-linear response of the fibre network and, in particular, to collective buckling modes in the inner region, which prevent the network from sustaining compressive stresses in this region. We also see that for p ≥ 0.75, the bending rigidity does not influence the stress profile, in agreement with our observations for ϕ(r). However, for p ≤ 0.65, the bending rigidity does play an important role. Indeed, for p = 0.65 we find a transition from σ_rr(r) ∝ r⁻² at high κ̃ (most rigid networks) to a slower decay σ_rr(r) ∝ r⁻¹ at low κ̃ (softest networks), while for p = 0.55 the linear elasticity decay disappears completely and the stress decays as r⁻¹ for all κ̃. This slow decay is related to the formation of so-called force chains, linear chains of stretched bonds that radiate outward. As we will see below, compressive stresses are irrelevant in this regime, while the tension in the force chains leads to a radial stress that decreases proportionally to the local density of force chains, which goes as 1/r.

Pattern of Force Transmission

Next, we study the pattern of the forces in more detail. We treat compressed and stretched bonds separately, as shown in Supplementary Figures S1A,B. Moreover, we only consider nodes connected to the cell surface via other deformed bonds. For each resulting cluster of deformed bonds, we compute the gyration tensor as

S_αβ = (1/N_G) Σ_i (r_i,α − 〈r_α〉)(r_i,β − 〈r_β〉),

where r_i,α is the α-coordinate of particle i, 〈r〉 is the mean position of the cluster, and the sum runs over all N_G nodes that are part of the deformed cluster.
We diagonalize the tensor to obtain the principal moments λ₁ and λ₂, and from these we compute the radius of gyration R_g = √(λ₁ + λ₂) and the asphericity a = (λ₁ − λ₂)²/(λ₁ + λ₂)², which takes values between 0 and 1 to indicate deviations from circular symmetry. The behaviour of R_g and a is shown as a function of strain in Figures 4A,B, respectively, for different values of κ̃, while Figure 4C shows corresponding snapshots for the stretched bonds. We first discuss the pattern of stretched bonds. For p ≥ 0.75 we find that R_g increases continuously with ϵ, preserving the spherical symmetry, as indicated by the low value of a. The growth of R_g with ϵ obviously is related to the growth of the stressed region seen in Figure 3A, and indicates how the stresses propagate further out as the cell contracts more. Again, the bending rigidity is unimportant in this stretching-dominated regime. When the connectivity is reduced to p = 0.65, we observe that R_g becomes dependent on κ̃. In particular, for low κ̃ the growth of R_g as a function of strain becomes erratic, which is due to large buckling-type rearrangements of nodes. The pattern is also highly asymmetric in these cases, as indicated by the relatively large value of a. When κ̃ increases and the network rigidity increases, the erratic behaviour of R_g disappears and the pattern becomes more isotropic, similar to the patterns at higher connectivity. This is also illustrated by the snapshots in Figure 4C for p = 0.65 and two different κ̃. For the lowest connectivity, p = 0.55, the pattern remains highly anisotropic for all values of the bending rigidity. For such diluted networks, we also observe large variations between different network configurations, so that the ensemble average shown in Figures 4A,B gives a somewhat distorted view. As shown in Supplementary Figure S2, for individual network realizations R_g grows erratically, with significant jumps that mark a sudden transition from floppy to rigid structures locally. This erratic behaviour also leads to very large differences between different network realizations for p = 0.55, in particular for low values of κ̃, which highlights that force transmission is less robust and reliable in sparse networks than in denser networks. Repeating this analysis for the compressed bonds, we see that for all p values R_g is significantly smaller than for the stretched bonds; for p = 0.55, R_g even decreases with increasing strain, as the cell pulls the nodes inwards. Hence, compression forces do not propagate far away from the cell surface, especially for low p, and the transmission of forces over long distances is dominated by stretched bonds. We note, finally, that our previous observation that forces can propagate over longer distances in more sparsely connected networks (Figure 3) does not lead to a larger radius of gyration of the force patterns. This is because the stress propagation in the more distant regions for low connectivity is governed by a small number of force chains, which contribute less to R_g than the dense zone of stressed bonds at higher connectivity.

Force Chains in the Network

As is clear from the snapshots in Figures 2, 4C, the pattern of the local forces differs greatly between networks of high and low connectivity. While the forces radiate outwards more or less isotropically at high p, the pattern at lower p is characterized by so-called force chains, sequences of stretched bonds that can transmit forces over long distances in certain directions (Grill et al., 2021).
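The cluster shape descriptors R_g and a used above are straightforward to compute from the node coordinates of a deformed cluster; a minimal sketch follows, with the mean-subtracted form of the gyration tensor as our assumption.

```python
import numpy as np

def cluster_shape(cluster_pos):
    """Radius of gyration and asphericity of a cluster of node positions, shape (N_G, 2)."""
    rel = cluster_pos - cluster_pos.mean(axis=0)          # positions relative to the cluster centre
    S = rel.T @ rel / len(cluster_pos)                    # 2x2 gyration tensor
    lam1, lam2 = np.sort(np.linalg.eigvalsh(S))[::-1]     # principal moments, lam1 >= lam2
    R_g = np.sqrt(lam1 + lam2)
    asphericity = (lam1 - lam2) ** 2 / (lam1 + lam2) ** 2  # 0 = circular, 1 = fully elongated
    return R_g, asphericity
```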
To analyze the pattern of these force chains in more detail, we follow previous work that explored force transmission in granular systems using concepts of graph theory (Bassett et al., 2015; Newman, 2018). It is well established that a granular material (i.e., a material consisting of jammed granular particles) can be mapped onto an athermal network, with contact forces between neighbouring particles represented as bonds between nodes. The mechanical properties of such granular materials are characterized by discrete force chains that are very similar to the force chains observed in our simulations. Mechanical forces are transmitted along these force chains, while regions between the force chains are shielded from mechanical stresses (Tordesillas, 2007; Somfai et al., 2005; Owens and Daniels, 2011). To analyze the network of force chains, we start from the cluster of stretched bonds, as shown in Figure 4C, and we first identify the nodes at the end of each force chain, i.e., the nodes at the periphery of the cluster after which the force propagates no further (see also Supplementary Figure S1C). We then construct all simple paths, i.e., all non-repeating sequences of nodes (Newman, 2018), that connect each of the nodes at the periphery to the cell surface, thus obtaining a distribution of all force chains. In Figure 5 we show the distribution of the topological lengths of these force chains, for different strains and different connectivities. The top row shows only the shortest paths between each periphery node and the cell surface, as a measure for the typical distance over which the force propagates, while the bottom row shows the distribution of all simple paths. A difference between the distribution of shortest paths and all paths indicates the presence of many secondary paths due to cross-connections between force chains. Such cross-connections make the propagation of forces more robust, since the mechanical transmission does not rely on one single path, but multiple paths can transmit the force. For p = 0.85 (Figure 5A) we see that at low strains, the distribution of path lengths decays monotonically, indicating that most of the force chains are short. However, when ϵ increases, the distribution acquires a clear optimum. This reveals that there is a characteristic length L_{s,t}^p over which forces propagate. This characteristic length is in agreement with our previous observation that the force pattern is rather isotropic for this connectivity (Figure 4), so that force chains reach the same distance in any direction. As the strain induced by the contracting cell increases, the characteristic length for force propagation increases, until at larger strains (ϵ → 0.5) it appears to saturate (see also Supplementary Figure S3A). The distribution for all simple paths follows a similar trend as that for the shortest paths, but the maximum in the distribution, L_t^p, lies at larger lengths. With increasing strain the two distributions start to deviate more (see also Supplementary Figure S3A), indicating that there are many interconnected force paths, in agreement with the formation of a dense, fully stressed region near the cell shown in Figures 2, 3. For p = 0.75 (Figure 5) we find a similar behaviour, although the peaks associated with the characteristic lengths L_{s,t}^p and L_t^p are wider, indicating that for lower connectivity there is a larger variation in the typical distance for force transmission. For p ≤ 0.65, the situation is completely different.
As shown in Figures 5C,D, for both p = 0.65 and p = 0.55, the length distribution decays monotonically for all strains, and follows a power-law decay n(L_t) ∝ L_t^(−α) with α ≈ 1 over a large range of lengths. Hence, there is no characteristic length of force propagation in these dilute networks. Most force chains are very short, but a small number of force chains can reach out far. Again, this is in agreement with the large asphericity and the anisotropic force patterns shown in Figure 4. For these low connectivities, we also find that the distribution of all simple paths is nearly the same as that of only the shortest paths (Supplementary Figure S3B), indicating that there are few interconnections between force chains, so that long-range force transmission relies on one or a few force chains only. We also explore the influence that κ̃ has on the force chains. As expected, for p ≥ 0.75 the bending rigidity does not modify the path length distributions, but for p = 0.65 an increase in bending rigidity promotes a characteristic distance, making the force chain network more similar to that for higher connectivities (see Supplementary Figure S3A). To analyze the morphology of the force chain network in more detail, we plot the topological length L_t of each force chain in the network as a function of the Euclidean distance L_E between the end of the force chain and the cell surface, see Figure 6 for p = 0.85 and 0.55, and Supplementary Figure S4 for p = 0.75 and 0.65. Here, L_t = L_E corresponds to a straight force chain, while L_t > L_E corresponds to a curved or irregular force chain (Bassett et al., 2015) (note that L_t cannot be smaller than L_E, so that the grey area in Figure 6 is unphysical). For small strains (ϵ ≤ 0.10), we observe that L_t ≃ L_E for all p. However, as ϵ increases, we find significant deviations from straight force chains for p = 0.85 and 0.75 (see Figure 6A and Supplementary Figure S3B, respectively), especially in the dense inner region, due to alternative paths that link the periphery of the force network to the cell surface. By contrast, for p = 0.65 and 0.55 and for larger κ̃, the difference between L_t and L_E remains smaller as a result of the reduced number of interconnections between force chains, indicating that most force chains follow a more or less straight path outward. The complex structure of all simple force paths that emerges can be further analyzed by computing the degree distribution P(z) of the nodes in the cluster of stretched bonds, where z indicates the number of stretched bonds connected to a node in the cluster: z = 1 corresponds to the end nodes of the force chains, z = 2 denotes linear sections of the force chains, and z ≥ 3 represents branches. In Figure 6B, we show P(z) for different p and for ϵ = 0.10 and 0.50. For high connectivities, we observe that P(z) develops a peak at z = 2 as the strain increases, with a significant number of nodes with z ≥ 3, as expected for a network of highly interconnected force chains. As the connectivity decreases, the number of such interconnections decreases as well. For p = 0.55, P(z) decreases monotonically with z, implying that most of the force chains extend only over one bond in this case, with only a few longer force chains. The number of nodes with z ≥ 3 is very small for this connectivity, indicating few branches and interconnections between force chains.
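The path-length and degree statistics discussed above can be gathered with standard graph-theory tooling; the sketch below uses networkx to collect shortest-path chain lengths from each periphery node to the cell surface and the degree distribution P(z) of the stretched-bond cluster. The way the stretched-bond graph and the node lists are built is an illustrative assumption, not the authors' implementation.

```python
from collections import Counter
import networkx as nx

def force_chain_statistics(stretched_bonds, cell_surface_nodes, periphery_nodes):
    """Shortest-path chain lengths L_t and degree distribution P(z) of the stretched-bond cluster."""
    G = nx.Graph()
    G.add_edges_from(stretched_bonds)              # graph built from stretched bonds only

    # topological length of the shortest force chain from each periphery node to the cell surface
    chain_lengths = []
    for end in periphery_nodes:
        dists = [nx.shortest_path_length(G, end, s)
                 for s in cell_surface_nodes
                 if end in G and s in G and nx.has_path(G, end, s)]
        if dists:                                  # keep only periphery nodes connected to the surface
            chain_lengths.append(min(dists))

    # degree distribution P(z): fraction of cluster nodes with z stretched bonds attached
    degree_counts = Counter(z for _, z in G.degree())
    n_nodes = G.number_of_nodes()
    p_z = {z: c / n_nodes for z, c in sorted(degree_counts.items())}

    return chain_lengths, p_z
```

All simple paths between a periphery node and a surface node can be enumerated analogously with nx.all_simple_paths, and the node betweenness centrality discussed next is available directly as nx.betweenness_centrality.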
Another way to characterize the force transmission in the networks is by analyzing the node betweenness centrality C_B for each node in the network. This is a measure often used in graph theory to denote the importance of a certain node for transmission within a network, and is defined as (Brandes, 2001)

C_B(j) = Σ_{i≠j≠k} n_{i,k}^j / n_{i,k},

where n_{i,k} is the number of shortest paths between nodes i and k, and where n_{i,k}^j is the number of these paths that pass through node j. Nodes with a high C_B are crossed by many shortest paths, which indicates that they have a greater influence on the transmission of forces. We show the distribution of the betweenness centrality P(C_B) for the different p and for ϵ = 0.50 in Figure 6C. For the highest connectivities, there are almost no nodes with a high C_B, because there are many shortest paths in the highly connected network of force chains. For p ≤ 0.65, however, the fraction of nodes with a high C_B is much larger, as indicated by the long tail in the distribution. In these networks, force transmission occurs by long linear force chains, where all bonds in the chain are essential for ensuring proper force propagation. Clearly, the removal of one node in the network for p ≤ 0.65 has a much more dramatic effect on the propagation of forces than at higher connectivities.

CONCLUDING REMARKS

Mechanical communication between cells relies on force transmission over large distances through the extracellular matrix (Janmey and Miller, 2011). The disordered structure of the matrix, its large mesh size and its mechanical nonlinearity make this a highly non-trivial process. Our results highlight how the connectivity of the fibre network and the bending rigidity of the collagen fibrils influence the local force transmission. On the one hand, for highly connected (and therefore relatively stiff) networks, the forces propagate isotropically in all directions over a characteristic distance that can be controlled by the contraction of the cell. On the other hand, in dilute (and soft) networks, forces propagate along a few force chains that can transmit forces over very long distances, but only in a few directions. This communication is less reliable and robust than for more rigid networks, which may be one of the reasons for the large variability and heterogeneity in cell behaviour in such matrices. In particular, networks close to the rigidity threshold (p = 0.65 in our case) are very sensitive to bending rigidity. These findings may be relevant for cell and tissue morphology and collective cell migration in environments of different rigidity. Indeed, our results are consistent with various experimental studies, which have reported that the distance over which cells can communicate mechanically appears to depend on the stiffness of the matrix (Guo et al., 2006; Reinhart-King et al., 2008; Winer et al., 2009; Janmey and Miller, 2011; Koorman et al., 2022). Furthermore, we speculate that the appearance of a characteristic transmission length that emerges around the cell when local stiffness increases may be related to the observation that cells in a colony organise at a typical distance from each other (Reinhart-King et al., 2008). We hope that this paper will provide an incentive for future research on force transmission in disordered networks, as many questions remain open.
For example, we have considered only uniform cell contraction, but previous work has suggested that cells often contract anisotropically to influence the direction of stress propagation (Baker and Chen, 2012; Koch et al., 2012; Ahmadzadeh et al., 2017). Furthermore, we have here used a rigid contractile body to model the cell. It would be interesting to study how mechanical feedback between the matrix and the cell emerges when the cell is itself modelled as a soft deformable object, for example, by treating the perimeter of the cell as a ring of springs that can stretch and bend (Gandikota et al., 2020). In such cases the anisotropic force chains may lead to spontaneous polarization of the cell. Cells can also actively restructure the matrix around them by inducing plastic deformations and thus influencing the propagation of mechanical signals (Vader et al., 2009; Kim et al., 2017). In addition, mechanical signalling may be affected by the hydrodynamic coupling between the collagen fibrils and the embedding fluid (Yucht et al., 2013; Head and Storm, 2019), as well as by the complex network composed of polysaccharides and glycosylated proteins in which collagen fibrils are embedded (Mouw et al., 2014; Burla et al., 2019b). Finally, we emphasise that the networks that we have studied here are 2D. While such networks have been shown previously to be excellently suited for characterising the mechanical properties of experimental collagen networks (Sharma et al., 2016a; Sharma et al., 2016b), it would be interesting to observe how force transmission occurs in 3D networks, thus introducing an additional degree of freedom to relax the local deformation in the network. Lastly, we suggest the possibility of using graph theory to characterize the mechanical propagation and the local distortion generated by cells on the ECM. In addition to the characteristics used here, many additional descriptors can be used to characterize the network's topology, also experimentally. These parameters may be used to train neural networks, for example, to develop a machine learning-based approach to identify cell-matrix and cell-cell interactions. This could eventually be used as a diagnostic tool or help to design synthetic matrices with optimal mechanical characteristics for mechanical feedback.

DATA AVAILABILITY STATEMENT

The original contributions presented in the study are included in the article/Supplementary Material; further inquiries can be directed to the corresponding authors.
2022-07-02T15:19:57.647Z
2022-06-30T00:00:00.000
{ "year": 2022, "sha1": "93b3f900eabb5356c5b095a146aff78d17a4212e", "oa_license": null, "oa_url": null, "oa_status": null, "pdf_src": "Frontier", "pdf_hash": "19cb6443b73157473f6cff6ad04005ed5e905f07", "s2fieldsofstudy": [ "Biology" ], "extfieldsofstudy": [ "Medicine" ] }
255941909
pes2o/s2orc
v3-fos-license
Miniature Magnetic Nano islands in a Morphotropic Cobaltite Matrix High-density magnetic memories are key components in spintronics, quantum computing, and energy-efficient electronics. Reduced dimensionality and magnetic domain stability at the nanoscale are essential for the miniaturization of magnetic storage units. Yet, inducing magnetic order, and selectively tuning spin-orbital coupling at specific locations have remained challenging. Here we demonstrate the construction of switchable magnetic nano-islands in a nonmagnetic matrix based on cobaltite homo-structures. The magnetic and electronic states are laterally modified by epitaxial strain, which is regionally controlled by freestanding membranes. Atomically sharp grain boundaries isolate the crosstalk between magnetically distinct regions. The minimal size of magnetic nano-islands reaches 35 nm in diameter, enabling an areal density of 400 Gbit per inch square. Besides providing an ideal platform for precisely controlled read and write schemes, this methodology can enable scalable and patterned memories on silicon and flexible substrates for various applications. Introduction Magnetic storage units are the smallest patterns of magnetization in magnetizable materials used for data storage. The digital information in the binary form (0 or 1 bit) can be accessed or processed from a magnetic head [1]. To increase the capacity of magnetic storage, the dimensionality of basic elements must be reduced while preserving the stability of magnetic nanodomains [2,3]. However, superparamagnetic effects in magnetic materials limit the minimum domain size [4,5]. Patterning continuous magnetic media composed of highly coupled magnetic grains into discrete single nanostructures has been proposed. Using directwrite e-beam lithography, magnetic circular dot arrays with an areal density of ~ 65 Gbit/in 2 have been commercialized [6]. However, they have reached not only the device fabrication limit but also the characterization limit of individual elements. Therefore, advanced materials and evolutionary fabrication processes are required to further reduce the periodicity of magnetic units to <25 nm to produce patterned media with enhanced areal density (>1 Tbit/in 2 ) [7]. Magnetic oxides are ideal candidates for fabricating memory devices owing to their chemical stability, strong magnetic anisotropy, and relatively large coercivity. Heavy orbital hybridization between transition metal ions and oxygen ions results in strong correlations among different degrees of freedom [8]. Previous studies demonstrated the manipulation of magnetic anisotropy owing to oxygen octahedral interconnection at interfaces [9][10][11]. For instance, the octahedral rotation in manganite layers is steeply suppressed by inserting a singleunit-cell-thick SrTiO3 layer between films and substrates, resulting in the rotation of the inplane magnetic easy axis [9,10]. Alternatively, the control of structural parameters can be achieved by capping an ultrathin layer with dissimilar symmetry [11]. An enhanced magnetic phase transition temperature and perpendicular magnetic anisotropy in ultrathin ferromagnetic oxides were observed after interfacial modifications [12]. Furthermore, using lithographic patterning, we locally controlled the magnetic properties of oxide heterostructures [13], expanding possibilities for developing memory devices at the nanoscale. 
Despite the control of magnetic anisotropy by oxide interface engineering, the crosstalk between nearby magnetic domains in thin films remains unavoidable. One strategy to break centrosymmetry in continuous two-dimensional (2D) thin films is to fabricate magnetizable domains in a nonmagnetic matrix using single materials. It requires magnetic materials that are highly sensitive to structural distortions so that their magnetic and electronic properties can be directly tuned. Recent investigations revealed that the magnetic states of ferroelastic LaCoO3 (LCO) are actively correlated to epitaxial strain [14]. Small lattice distortions such as deformations and oxygen octahedral tilts (or rotations) modify the balance between crystal field splitting (∆ ) and intraatomic exchange interaction (∆ ); thus, the spin states of Co ions are reversibly switched [15]. Generally, tensile strain increases the population of higher-spin-state Co 3+ ions, resulting in a robust ferromagnetic state, whereas in-plane compression promotes lower-spinstate transition; as a result, long-range magnetic ordering cannot be formed in compressively strained LCO films [16]. Therefore, LCO has been recognized as a promising route for controlling magnetic states using strain engineering. Typically, for modifying the strain states of epitaxial films, changing substrate materials or introducing buffer layers with different compositions was reported [17,18]. However, this regulates the in-plane strain states, crystalline orientations, or rotation patterns of the films once a substrate is chosen. It is challenging to modify electronic and magnetic states regionally along the film plane. Recently, Wu et al. and Chen et al. reported the growth of ferroic oxide thin films with laterally tunable strains and orientations using acid-soluble manganite layers and water-soluble sacrificial layers, respectively [19,20]. In both cases, the original substrates, and suspended freestanding membranes served as independent building blocks for the epitaxial growth of functional oxides. Here, we report the construction of ferromagnetic nanoislands with diameters of tens of nanometers in nonmagnetic media based on cobaltite homostructures. The minimum domain size achieved using this approach enables the development of ultrahigh-density magnetic storage for various applications. Results We firstly deposited water-soluble sacrificial Sr3Al2O6 (SAO) layers (≈30 nm) [21] and SrTiO3 (STO) layers (≈3 nm) subsequently on (001)-oriented (LaAlO3)0.3-(Sr2AlTaO6)0.7 (LSAT) single-crystalline substrates using pulsed laser deposition (PLD). The ultrathin STO membrane was thermally transferred on LaAlO3 (LAO) and other substrates (glass, α-Al2O3, and silicon) after delamination from LSAT substrates (see Methods). The freestanding STO (FS-STO) membranes maintained their original shape and high crystallinity at the millimeter scale. To test the distinct physical properties, we covered the substrates partially with FS-STO membranes with an area ratio of 40%. Then, LCO films were fabricated on the modified substrates. Figure 1a shows the schematic of a grain boundary region in LCO hybrid homostructures. The LCO films (defined as LCOC) grown directly on LAO substrates showed a compressive strain of ~ -0.52%, resulting in ~0.7% elongation along the out-of-plane direction (Figure 1b). On the other hand, the LCO films (defined as LCOT) were tensile-strained by FS-STO membranes. 
Using both X-ray diffraction (XRD) θ-2θ scans and reciprocal space mappings (RSM), we obtained the lattice parameters of LCOT films grown directly on singlecrystalline STO substrates and freestanding membranes (Figures 2b and 2c). We found that the in-plane lattice constant of FS-STO reduced to 3.874 ± 0.015 Å, while its out-of-plane lattice constant increased to 3.920 ± 0.008 Å. This result indicates that ultrathin FS-STO membranes are more responsive to in-plane compression than the relatively thick LCO top layers. We determined that the in-plane tensile strain in LCOT films reduced from 2.5% (on STO substrates) to 1.68% (on FS-STO membranes). Microscopic structural analysis was examined by cross-sectional high-angle annular dark field (HAADF) imaging via scanning transmission electron microscopy (STEM) (Figure 1d and Figure S1). A representative HAADF-STEM image from a grain boundary region in LCO hybrid homostructures shows that the LCOC/LAO and LCOT/FS-STO heterointerfaces are atomically sharp. Both LCOC and LCOT films were epitaxially grown without apparent defects and chemical disorders. We calculated the atomic distance between A-site ions along in-plane Figure S2). We observed mainly periodic dark stripe patterns perpendicular to the interface in LCOT layers while these superstructures were absent in LCOC layers. These results are in excellent agreement with previous observations [22]. Since the degree of tensile strain reduces in LCOT layers grown on FS-STO membranes, relatively fewer stripes appear compared to the LCOT films sandwiched between STO layers [23]. Notably, the dark stripes exist only in the regions close to FS-STO membranes, indicating that the tensile strain partially relaxes with increasing LCOT film thickness. Furthermore, we performed optical secondharmonic generation (SHG) measurements on LCOC and LCOT layers independently. As indicated in Figure 1g and Figure The distinct magnetic properties of LCOC and LCOT layers on a single LCO hybrid homostructure were revealed by performing nanodiamond (ND) nitrogen-vacancy (NV) magnetometry measurements [25]. NV magnetometry was performed at zero magnetic fields and switchable temperatures ranging from 6 K to 120 K. By recording optical detected magnetic resonance (ODMR) spectra of the numbers of NV centers in a single ND, we calculated the projection of the magnetic stray field along the NV axis due to the energy splitting (2γB) in the presence of weak magnetic perturbation (B). Since NDs were dispersed on the sample surface, we measured the ODMR spectrum from either LCOT or LCOC layers at different locations independently. Figure 2c shows the ODMR spectra of a single ND on LCOT (left panels) and LCOC (right panels) at various temperatures. At low temperatures, the splitting between two resonance peaks from LCOT layers was large, and it reduced gradually with increasing temperature. However, the energy splitting from LCOC layers maintained a small but fixed value as the temperature varied from 6 to 120 K. We calculated 2γB by subtracting two resonant peak frequencies and summarized these values as open symbols in Figure 2b, although NV magnetometry cannot obtain the exact net moment of individual layers because the magnitude of 2γB depends on the number of NVs in a single ND, proximity distance, and magnetic homogeneity. The temperature-dependent 2γB yields the estimated Curie temperature (TC) of LCOT layers (~80 K), which agreed with that obtained from SQUID measurements. 
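As an aside on the conversion used above, the stray field B follows from the measured splitting 2γB by fitting the two resonance dips and dividing their separation by 2γ. The Lorentzian line-shape model and the NV gyromagnetic ratio value below (γ ≈ 28 GHz/T, a standard literature value) are our assumptions for illustration and are not taken from this work.

```python
import numpy as np
from scipy.optimize import curve_fit

GAMMA_NV = 28.0e9  # Hz/T, NV gyromagnetic ratio (assumed standard value)

def two_lorentzians(f, a1, f1, w1, a2, f2, w2, c):
    """Two Lorentzian dips on a flat background, a common model for a split ODMR spectrum."""
    return c - a1 * w1**2 / ((f - f1)**2 + w1**2) - a2 * w2**2 / ((f - f2)**2 + w2**2)

def stray_field_from_odmr(freq_hz, signal, p0):
    """Fit the two resonance frequencies and convert the splitting 2*gamma*B into B (tesla)."""
    popt, _ = curve_fit(two_lorentzians, freq_hz, signal, p0=p0)
    f1, f2 = popt[1], popt[4]
    splitting = abs(f2 - f1)            # = 2 * gamma * B projected on the NV axis
    return splitting / (2.0 * GAMMA_NV)
```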
2γB obtained from LCOC layers remained nearly constant at all temperatures, suggesting that M weakly depended on temperature when LCO films were compressively strained. The discrepancies between 2γB-T and M-T curves at low temperatures indicate additional PM contributions from substrates and FS-STO membranes. The SQUID measures the total M from an entire sample, while NV magnetometry is insensitive to PM signals originating from the layers/substrates under LCOC and LCOT layers, making it advantageous to apply NV magnetometry to measure magnetic properties in different regions. Field-dependent magnetic force microscopy (MFM) measurements were conducted at 6 K. Figures 2d and 2e show the MFM images of LCOT and LCOC layers collected when magnetic fields switched between -2 T and 2 T, respectively. At -2 T, the LCOT layers contained multiple static magnetic domains with reversed polarity. As the magnetic field increased from -2 T to 2 T, magnetic domains (yellow) in LCOT shrank and showed reversed polarity (blue). These results demonstrate the ferromagnetic character of LCOT layers. In contrast, LCOC layers did not show magnetic phase contrast, and these domains barely changed with applied fields, suggesting that LCOC layers are nonferromagnetic. These results are consistent with SQUID and NV magnetometry measurements. Microscopic magnetic imaging indicates that the lateral magnetic ground states in LCO hybrid homostructures strongly depend on epitaxial strain. The correlation between strain and the electronic states of valence electrons in LCO hybrid homostructures was revealed by performing element-specific X-ray absorption spectroscopy (XAS) measurements at room temperature. Figure 3a shows the XAS results at O K-edges for LCOC and LCOT layers. The shadow region centered at ~530 eV represents the excitation of electrons from O 1s to the Co 3d-O 2p hybridization states. Both XAS curves agree well with the features of XAS at O K-edges for bulk LaCoO3 with Co3+ [26]. Meanwhile, the XAS results at Co L-edges showed strong peaks at ~780 eV (L3) and ~795 eV (L2) corresponding to the excitations of electrons from the 2p core levels to the 3d unoccupied states, consistent with the peak positions of Co3+ ions [27]. These results indicate that the valence state of Co ions (+3) was maintained regardless of the film strain. The layers did not show significant numbers of oxygen vacancies, which might otherwise influence the magnetization of LCO layers. Orbital occupancy in different regions of the LCO hybrid homostructure was further characterized by X-ray linear dichroism (XLD) using linearly polarized X-ray beams with variable incident angles, as shown in Figures 3b and 3c. When the incident angle is 90°, one anticipates that the absorption of X-rays arises entirely from in-plane orbitals (dx²−y²). When the incident angle switches to 30° with respect to the surface plane, the XAS signals contain both dx²−y² and d3z²−r² orbital information. Comparing the difference in the peak energy between 90° and 30° (Figures 3d and 3e), we noticed that the LCOT layer under tensile strain had a lower peak energy of the dx²−y² orbital compared to that of the d3z²−r² orbital, while the LCOC layer showed the opposite effect. We calculated the nominal XLD value by subtracting the 30° spectrum from the 90° spectrum for both LCOT and LCOC layers (Figures 3f and 3g).
The XLD value is negative for the tensile-strained LCOT layer, suggesting higher electron occupancy in dx 2 -y 2 orbitals, whereas the LCOC layer showed a positive XLD value, indicating that electrons preferentially occupy the d3z 2 -r 2 orbital instead. The difference in the electron occupancy determines the spin states of Co 3+ ions under different strain states. Consequently, the spin states of Co ions change from high spin states in LCOT (inset of Figure 3f) to low spin states in LCOC (inset of Figure 3g). The systematic XAS results provide solid evidence of the strain-mediated magnetic states of LCO hybrid homostructures, which agreed well with earlier magnetization characterizations. To miniature ferromagnetic LCOT domains in the nonferromagnetic LCOC matrix, FS-STO nanodot arrays were fabricated top-down using a nanoporous anodic alumina mask and then thermally transferred onto LAO substrates [28]. Subsequently, LCO films were deposited on the modified substrates, as described in Methods. The schematic of the crossbar device structure with a vertical/horizontal write and readout geometry is shown in Figure 4a. The ferromagnetic signal can be read using spin transfer torque or thermally excited magnons through crossbars consisting of heavy metals or 5d-element oxides with intrinsic large spin-orbital coupling effects [29,30]. In Figure 4b and Figure S6, we show LCOT nanodomains with a diameter of ~35 nm corresponding to an areal density of ~400 Gbit/in 2 , which supersedes those of commercialized memory devices. A higher storage density could be achieved by further reducing the size of nanoislands. The experimentally achievable lower limit for the nanodomain size is unclear, which depends on growth conditions and nanofabrication techniques. However, the size of nanodomains would not greatly influence magnetic properties because the film thickness along the growth direction can be precisely controlled to maintain its long-range spin ordering. We analyzed strain distributions within LCO nanodomains along the in-plane and outof-plane directions (Figures 4c and 4d). LCOT layers exhibited a larger in-plane lattice constant than LCOC layers, whereas the out-of-plane lattice constant of LCOT decreased slightly due to the tensile strain. Thus, strain modulation in LCOT nanoislands is valid at the nanoscale, implying that the large-area control of ordered nanoislands can promise various applications. Finally, we further demonstrate that ferromagnetic nanodomains in cobaltite homostructures can be integrated into silicon-based CMOS technology. Using the same strategy, FS-STO membranes were transferred onto silicon (Figure 4e). The LCOT layers grown on FS-STO membranes were highly epitaxial and maintained a tensile strain of ~ 1.6%. The epitaxial strain of LCOT layers was slightly smaller than that of LCOT layers directly grown on STO substrates (Figure 4f). We observed a similar orbital polarization in LCOT layers independent of the target substrate ( Figure S7), yielding identical strain effects on epitaxial LCOT layers. These LCOT layers showed a typical ferromagnetic character with a clear magnetic phase transition. TC increased by ~5 K compared to the TC of LCOT single layers. Moreover, the MS of LCOT layers on FS-STO membranes reached ~217 emu/cm 3 (~1.44 μB/Co), which is ~50% larger than that of LCOT single layers. Enhanced MS in LCOT layers can be attributed to the reduced tensile strain. 
Previously, it was reported that the MS of LCO films grown on LSAT was larger than that of LCO films grown on STO [16,31]. In both cases, the Co-O bond length ... In addition, we measured the magnetic properties of the FS-LCO membrane and LCO amorphous layers grown directly on silicon (Figure S8). Both samples were nonferromagnetic, similar to their bulk form, revealing the influence of epitaxial strain on the stabilization of long-range spin ordering. The methodology described in the present work can be applied to arbitrary substrates, such as amorphous glass and α-Al2O3 (Figure S9). LCOT layers maintained their high crystallinity and orientation, similar to the buffered ultrathin membranes. This also allows the design of a functional grain boundary (GB) with laterally dissimilar orientation and strain. The atomically thin GB may serve as another type of information storage medium with remarkably high density in the film plane, similar to long-term ferroelectric conductive domain wall memories. Further, this work enables the replacement of ultrathin dielectric STO membranes with other multifunctional (superconducting, ferroelectric, or photoelectric) oxide membranes. The stacking of these correlated oxide membranes with controllable twist angle, stacking order, and periodicity may provide a versatile platform for both fundamental research and applied sciences [32].

Discussion and conclusions

The construction of ferromagnetic nanoislands in a nonmagnetic matrix based on cobaltites is demonstrated. The electronic and magnetic states of cobaltite homostructures were laterally modified by epitaxial strain, which was induced using membranes/substrates. A distinct magnetic contrast was obtained at the nanoscale, suggesting the capability to attain ultrahigh areal density for data storage. Furthermore, the generic method presented in this work is applicable to both silicon and flexible substrates. These results pave the way for the fabrication of nanoscale magnetic elements using distinct schemes for thin-film epitaxy.

... NDs at the same region. The full ODMR spectrum exhibits a split two-peak feature due to the Zeeman effect and can be fitted using Lorentzian functions. The ODMR intensity changes systematically with sweeping microwave frequency. The LCO hybrid sample was cooled down to 6 K using a closed-cycle He refrigerator. The measurements were performed progressively during the warm-up process from 6 to 150 K. After each measurement, the sample was thermally stabilized for an hour to ensure an accurate temperature.

XAS and XLD measurements

Element-specific XAS measurements were performed on LCO hybrid samples grown on LAO substrates at the beamline 4B9B of the Beijing Synchrotron Radiation Facility (BSRF). All spectra at both O K-edges and Co L-edges were collected at room temperature in total electron yield (TEY) mode. The LCO hybrid structures were properly grounded using copper tapes to obtain the best signal-to-noise ratio. The typical photocurrents measured using TEY mode were on the order of pA. XAS measurements were performed alternately at LCOC and LCOT regions of LCO hybrid structures. The incident angle of the linearly polarized X-ray beam was varied from 90° to 30° with respect to the surface plane. When the incident angle is set to 90°, the XAS signal reflects the dx²−y² orbital occupancy directly, whereas when the incident angle is changed to 30°, the XAS contains both dx²−y² and d3z²−r² orbital information.
To quantify the orbital occupancy in Co eg bands, we calculated the nominal XLD by subtracting the 30° spectrum from the 90° spectrum. All XAS data were normalized to the values at the pre- and post-edges for direct comparison.

It covers part of the surface of the LAO substrate, leaving a gap layer less than 1 nm thick. Dark stripes are clearly absent from the LCOC layers, whereas the LCOT layers contain vertically aligned dark stripes. This behavior demonstrates that the LCO layers exhibit laterally distinct strain states depending on the substrates/membranes underneath. Please note that the dark stripes in LCOT layers persist in only approximately one half of the entire layer. This differs from previous work, in which the dark stripes occupy the entire LCO single films grown on STO substrates. We attribute this behavior to the partial relaxation of the epitaxial strain provided by FS-STO membranes.

(b) and (c) Atomic distance between nearby A-site elements along the in-plane and out-of-plane directions, respectively. We found that the out-of-plane lattice constant of FS-STO is larger than its in-plane lattice constant. This result can only be attributed to the as-grown LCOT layers applying a compressive strain back onto the ultrathin FS-STO membranes. Meanwhile, the in-plane lattice constant of LCOT layers is slightly smaller than the out-of-plane one.

(a) XRD θ-2θ scans of LCOT grown on 10-nm-thick FS-STO-modified glass, α-Al2O3, and silicon substrates. Dashed lines mark the 00l peak positions of LCOT layers and FS-STO membranes. "*" denotes the substrate peaks. "#" in the middle curve identifies an unidentified impurity peak. All results reinforce the epitaxial growth of LCOT layers on FS-STO membranes, although the rest of the LCO layers grown directly on arbitrary substrates are amorphous due to the large lattice mismatch or different crystalline symmetries.
2023-01-18T06:42:33.115Z
2023-01-15T00:00:00.000
{ "year": 2023, "sha1": "4851843a8bb65032488753cf3614003e4a95fabd", "oa_license": null, "oa_url": null, "oa_status": null, "pdf_src": "Arxiv", "pdf_hash": "4851843a8bb65032488753cf3614003e4a95fabd", "s2fieldsofstudy": [ "Education", "Physics", "Materials Science" ], "extfieldsofstudy": [ "Physics" ] }
236341385
pes2o/s2orc
v3-fos-license
Antimigration Effects of the Number of Flaps on a Plastic Stent: Three-Dimensionally Printed Pancreatic Phantom and Ex Vivo Studies : Stent migration is a significant obstacle to successful stent placement. There has been no investigation of the effect and quantitative interpretation of flaps attached to a plastic stent (PS) on antimigration. The antimigration effects of the number of flaps on a PS in a 3D-printed pancreatic phantom (3DP) and extracted porcine pancreas (EPP) were investigated. Four PS types were used in this study: stent without flaps (type 1), stent with two flaps (type 2), stent with four horizontally made flaps (type 3), and stent with four vertically made flaps (type 4). The stents were measured and compared for antimigration force (AMF) in the 3DP and EPP using a customized measuring method and an integrated measuring device. The mean maximum AMFs (MAMFs) in types 2, 3, and 4 were significantly higher than that in type 1 (all p < 0.001). Moreover, the mean MAMFs in types 3 and 4 were significantly higher than that in type 2 (all p < 0.001). When the flaps were removed from the pancreatic duct, the AMF decreased rapidly. As the number of flaps increased, the antimigration effects significantly increased in the 3DP and EPP. However, the direction of the flaps did not affect the MAMF. The position of the flaps attached to the surface of the stent affected the AMF. Introduction Endoscopic placement of a plastic stent (PS) is currently a well-accepted therapeutic option for pancreatic duct (p-duct) strictures caused by chronic or acute pancreatitis, traumatic injuries, surgical complications, pseudocysts, and malignant diseases of the pancreatic head and periampullary regions [1][2][3][4][5][6][7]. Furthermore, the placement of a transanastomotic stent using various stent materials is commonly used to prevent anastomotic leakage and subsequent fistula and stricture formation at a pancreatoenteric anastomosis. However, various stent-related complications, such as infection, stent obstruction, duodenal erosions, ductal perforation, and either proximal or distal migrations, have been reported in clinical trials [8]. Especially regarding stent migration, distal migration was rarely harmful as the migrated stent passes into the duodenum and is usually excreted. However, proximal migration further into the p-duct has been shown to occur at a rate of 5-6% and results in severe pancreatitis [9]. Stent migration is one of the significant obstacles for successful stent placement. Migrated stents pose a serious management dilemma, with some patients requiring surgical removal of the stents [10]. Most PSs in studies have been straight or sigmoid-shaped with barbs or flaps at each end to prevent migration or dislocation [11]. However, design and quantitative studies are insufficient to determine whether the number or direction of flaps or barbs prevents stent migration in the p-duct. In previous studies, there was no investigation of the effect and quantitative interpretation of flaps attached to the PS on antimigration. Therefore, four PS types, having different numbers and directions of flaps, were manufactured. Furthermore, the artificial 3D-printed pancreatic phantom with p-duct (3DP) and extracted porcine pancreas (EPP) were developed to evaluate the antimigration effects of a PS. This study investigates the antimigration effects of the number of flaps in a PS on 3DP with p-duct and EPP. 
Preparation of the PSs The PSs used in this study were designed and manufactured using a biocompatible polymer by a micro-extrusion process. We used polyether block amide Pebax 5533 SA MED polymer (Arkema, France), which is a representative biocompatible elastomer actively used in various polymer stents and catheters. The mechanical properties of Pebax5533 are presented in Table 1. In addition, the polymer was compounded with 20 wt% BaSO 4 to provide radiopacity. The stents were made according to our specifications (KITECH; Korea Institute of Industrial Technology, Daegu, Korea) and are not commercially available elsewhere. The stents had a tubular structure with or without flaps and were 2 mm in diameter and 30 mm in length. Each flap was 2 mm in length and projected 60 • toward the papilla. As shown in Figure 1, the flexural stress of the dried stent shaft was measured according to the vertical displacement at room temperature. A three-point bending test method was applied, and the speed at which the clamp presses the load point was set to 20 mm/min. The maximum stress of the PS shaft was analyzed to be 0.56 MPa. To analyze the effect of the number and direction of flaps on antimigration, four types of PS were prepared as follows: stent without flaps (type 1) as a control, stent with two flaps (type 2) as a commonly used example, stent with four horizontally made flaps (type 3) attached in the same direction as type 2, and stent with two horizontally and two vertically made flaps (type 4) attached with a 90 • difference (Figure 2a). Design of the 3DP The 3DP, similar to a human pancreas from an open-source standard triangulated language file, was developed using injection molding of liquid silicone rubber. The modeling of the pancreas with moldings and the 3DP were designed and made by local manufacturers (ANYMEDI, Seoul, Korea). The 3DP consisted of the pancreas and the p-duct. It was manufactured using two types of silicone with different hardnesses by considering the tissue properties of each part. The pancreas part was made of a hard-silicone material (Dragon Skin-Silicone Elastomer, Smoothon, AB, Canada) and was 200 mm in total length and designed to contain the p-duct inside. The p-duct part was 2 mm in diameter and 200 mm in length, and it was stabilized using a soft-silicone material (Vero Magenta RGD, Stratasys Ltd., CA, USA) (Figure 2b,c). Preparation of the EPP for Ex Vivo Examination This study was approved by the Institutional Animal Care and Use Committee of the Asan Institute for Life Sciences (2017- ) and conformed to US National Institutes of Health guidelines for humane handling of laboratory animals. One pig (Yorkshire; weight, 35.5 kg; Orient Bio, Seongnam, Korea) was euthanized after administering anesthesia according to the ethical procedures for pancreas extraction. Anesthesia was induced by intramuscular injection of a mixture of 50 mg/kg zolazepam, 50 mg/kg tiletamine (Zoletil 50; Virbac, Carros, France), and 10 mg/kg xylazine (Rompun; Bayer HealthCare, Leverkusen, Germany). Next, the pig was immediately euthanized by administering 75-150 mg/kg potassium chloride. The pancreas was surgically explored to evaluate the antimigration effects of the stents. 
Measuring Device Setup and Measurement of Antimigration Effects The measuring device consisted of a 3D-printed (Ultimaker 3; Ultimaker, Utrecht, The Netherland) jig, measuring table, load cell (KTOYO/333FB, Gyeonggido, Korea) with a measuring range of 0.25-500 g, microcontroller (Arduino UNO R3, Arduino AG, Somerville, MA, USA), and suture thread (Vicryl 4-0, Ethicon Inc., Somerville, NJ, USA). A load cell was fixed to the 3D-printed jig, which functioned as a slider on an instrument base. The force measurement of the four samples was evaluated ( Figure 3). The four PS types were analyzed for antimigration forces (AMFs) in the 3DP and EPP using customized measuring methods and an integrated measuring device ( Figure 4). The load cell of the measuring device was connected to the distal end of the PS using a suture thread. Each stent sample was placed into the p-duct of the 3DP or EPP under fluoroscopic guidance to confirm the stent position. A load cell unit was pulled at a speed of 5 mm/s on a sliding guide to measure the AMF of the PS. The total length of the sliding was 80 mm; therefore, the AMFs were measured for 16 s to stay on the set speed for each measurement. AMF was defined as the resistance force to migration between the inner surface of the p-duct models and the stent. The AMF was continuously monitored and realized by using a microcontroller connected to the load cell. A data processing code was developed in a numerical computing environment (Matlab 2018b, MathWorks Inc., Natick, MA, USA). All experiments were repeated 10 times using each PS type. Histological Examination The EPP was transversely sectioned at the proximal, middle, and distal regions of the stented p-duct and the normal region of p-duct to evaluate the possible mucosal injuries during the stent removal procedure. Tissue samples were fixed in 10% neutral buffered formalin for 24 h and then embedded in paraffin. The slides were stained with hematoxylin and eosin (H&E). Statistical Analysis Data are expressed as the mean ± standard deviation. The differences between the stent types were analyzed using the Kruskal-Wallis test or Mann-Whitney U-test, as appropriate. p-values < 0.05 were considered statistically significant. For p-values < 0.05, a Bonferroni-corrected Mann-Whitney U-test was performed to detect which stent type's cause differences (p < 0.008 as statistically significant). Statistical analyses were performed using Statistical Package for the Social Sciences (version 24.0; IBM Corp., Armonk, NY, USA). Results All PSs were successfully placed without difficulty at the base of the 3DP and EPP. The AMFs were successfully analyzed without disconnecting any suture thread during the experiments. During the measurement of the AMFs, perforations or scratched traces in the p-duct of the EPP were not detected after complete removal of all PSs. The maximum AMF (MAMF) values are summarized in Table 2. The mean MAMF was significantly different between the four types (p < 0.001) in the 3DP and EPP. In the 3DP, the mean MAMFs in types 2, 3, and 4 were significantly higher than that in type 1 (all p < 0.001). Furthermore, the mean MAMFs in types 3 and 4 were significantly higher than that in type 2 (all p < 0.001). In the EPP, the mean MAMFs in types 2, 3, and 4 were significantly higher than that in type 1 (all p < 0.001). The mean MAMFs in types 3 and 4 were also significantly higher than that in type 2 (all p < 0.001). 
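The data processing described above (load-cell trace to maximum antimigration force, followed by nonparametric comparisons) was implemented in MATLAB; a minimal Python equivalent is sketched below. The sampling interval, the grams-to-newtons conversion, the array names, and the example MAMF values are our assumptions for illustration only, not data from this study.

```python
import numpy as np
from scipy import stats

def max_antimigration_force(force_trace_g, speed_mm_s=5.0, dt_s=0.05):
    """Convert a load-cell trace (grams-force) to force (N) vs displacement (mm) and return the MAMF."""
    force_N = np.asarray(force_trace_g) * 9.81e-3            # grams-force -> newtons
    displacement_mm = np.arange(len(force_N)) * dt_s * speed_mm_s  # pull speed of 5 mm/s
    return force_N.max(), displacement_mm, force_N

# Example comparison of two stent types (10 repeats each) with the Mann-Whitney U test;
# the MAMF values below are hypothetical placeholders.
mamf_type1 = np.array([0.10, 0.12, 0.09, 0.11, 0.10, 0.13, 0.08, 0.11, 0.10, 0.12])
mamf_type2 = np.array([0.35, 0.40, 0.33, 0.38, 0.36, 0.41, 0.34, 0.37, 0.39, 0.36])
u_stat, p_value = stats.mannwhitneyu(mamf_type1, mamf_type2, alternative="two-sided")
# With a Bonferroni correction over pairwise comparisons, p < 0.008 would be treated as significant.
```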
However, no statistically significant difference was found between types 3 and 4 in the 3DP and EPP (p = 0.113 and 0.493, respectively). Table 2. The maximum antimigration force (MAMF) values in the 3D-printed pancreatic phantom with the pancreatic duct and the extracted porcine pancreas. The mean MAMFs of all types of PS in the 3DP were significantly higher than those in the EPP (all p < 0.001). MAMF (N) p-Value The continuous AMF changes are shown in Figure 5. When the flaps were removed from the p-duct, the AMF rapidly decreased. AMF changes were affected by each location of attached flaps on the surface of the PS in types 3 and 4. In the histological results, no mucosal or submucosal injuries were observed in the proximal, middle, and distal regions of the stented p-duct in types 1, 2, 3, and 4 compared with the normal p-duct ( Figure 6). In addition, perforations or scratched traces of the p-duct were not detected during removal of the PS with all types on gross examination. Discussion Our results demonstrated that the resistance to migration of types 2, 3, and 4 was significantly greater than to that of type 1. Similar results were observed when comparing PSs with four flaps (types 3 and 4) with those with two flaps (type 2). No difference was found between PSs with the same number of flaps (types 3 and 4). These findings support that the AMF proportionally increased as the number of flaps increased, and the vertical and horizontal directions of the flaps did not affect the MAMF. However, the attachment position of the flaps can affect the AMF. Our histological results demonstrated that there were no mucosal injuries caused by attached flaps during stent removal. Stent migration can be sufficiently prevented when the number of flaps is increased, and using a PS with flaps angled 60 • toward the papilla is an effective and safe strategy for preventing stent migration. When developing an antimigration stent, the attachment location and number of flaps should be considered. The endoscopic placement of a PS in the p-duct can resolve or improve symptoms in patients with ductal stricture [12,13]. Despite the several advantages of PS placement, periodic stent exchanges or re-interventional procedures are inevitable because of stent deterioration or obstruction and various other stent-related complications [14][15][16][17][18][19]. Stentrelated complications included proximal or distal migration, stent occlusion, and stentinduced p-duct changes. In addition, 5.2% of cases had proximal migration, and 7.5% had distal migration [9]. To overcome stent migration, PSs were developed in varying shapes and sizes and having varying numbers of flaps, barbs, or flanges. Depending on PS type, stent size usually ranges from 3 to 10 French, with variable numbers of internal and/or external flaps made of polyethylene [20]. The experimental results were properly acquired using the proposed device. Due to the compactness of the device that contains a small number of components with a small footprint, the 3DP and EPP experiments were accomplished in a simple manner. From the acquired data in Table 2, the MAMF in the EPP shows a significantly small value compared with that in the 3DP. This was derived from the frictional effect and elongation ratio of the test pieces. The silicone surrounding the p-duct may provide a small elongation ratio compared with that in the EPP. In contrast, the friction coefficient of the silicone can be higher than that of the p-duct tissues. 
These are considered major factors that made differences. Nevertheless, the overall tendency was similar in both experiments. This study has some limitations. First, the total number of PSs was relatively small for performing a robust statistical analysis, even though the Bonferroni-corrected Mann-Whitney U test was used in this study. Second, the EPP was too tenacious to faithfully reflect the characteristics (e.g., size, configuration, and quality) of the human pancreas. In vivo tests are required to precisely measure the AMF of the PS with flaps in the p-duct. Third, the push-pull gauge apparatus for measuring the AMF was operated at a fixed rate without a delicate control. Fourth, the displacement from the origin point was not clearly defined because the device does not contain any position sensors for the moving axis, which means that the migration distance was not strictly synchronized in each trial. To complement this limitation, the stent location in the 3DP and EPP was confirmed using fluoroscopic guidance. Fifth, the stents were not placed in models representing p-duct stricture but in those representing a normal pancreas without strictures. An ideal stent should be safe and effective in clinical trials. Flaps attached to a stent might effectively prevent migration without mucosal damage of the p-duct. In this study, neither PSs with flaps nor those without flaps caused mucosal damage, such as ductal perforation or scratches during stent removal in the EPP. However, further studies involving in vivo animal models are required for accurate evaluation. Conclusions In conclusion, to our knowledge, this is the first study evaluating the antimigration effects of four PS types in the 3PD and EPP. As the number of flaps increased, the antimigration effects significantly and proportionally increased in the 3DP and EPP. However, the direction of the flaps did not affect the MAMF. The attachment position of the flaps on the surface of the stent also affected the AMF. Additional studies are required to optimize the attachment position and size of the flaps on a PS. The number and location of flaps should be considered when developing a PS for preventing stent migration in clinical settings. Institutional Review Board Statement: This study was approved by the Institutional Animal Care and Use Committee of Asan Institute for Life Sciences (2017- ) and conformed to US National Institutes of Health guidelines for humane handling of laboratory animals. Informed Consent Statement: Not applicable. Data Availability Statement: The data presented in this study are available on request from the corresponding author. The data are not publicly available due to ethical issues.
Antimicrobial, Antivirulence, and Antiparasitic Potential of Capsicum chinense Jacq. Extracts and Their Isolated Compound Capsaicin Bacterial, fungal, and parasitic infections increase morbimortality rates and hospital costs. This study aimed to assess the antimicrobial and antiparasitic activities of the crude extract from the seeds and peel of the pepper Capsicum chinense Jacq. and of the isolated compound capsaicin and to evaluate their ability to inhibit biofilm formation, eradicate biofilm, and reduce hemolysin production by Candida species. The crude ethanolic and hexane extracts were obtained by maceration at room temperature, and their chemical compositions were analyzed by liquid chromatography coupled to mass spectrometry (LC–MS). The antimicrobial activity of the samples was evaluated by determining the minimum inhibitory concentration. Inhibition of biofilm formation and biofilm eradication by the samples were evaluated based on biomass and cell viability. Reduction of Candida spp. hemolytic activity by the samples was determined on sheep blood agar plates. The antiparasitic action of the samples was evaluated by determining their ability to inhibit Toxoplasma gondii intracellular proliferation. LC–MS-ESI analyses helped to identify organic and phenolic acids, flavonoids, capsaicinoids, and fatty acids in the ethanolic extracts, as well as capsaicinoids and fatty acids in the hexane extracts. Antifungal action was more evident against C. glabrata and C. tropicalis. The samples inhibited biofilm formation and eradicated the biofilm formed by C. tropicalis more effectively. Sub-inhibitory concentrations of the samples significantly reduced the C. glabrata and C. tropicalis hemolytic activity. The samples only altered host cell viability when tested at higher concentrations; however, at non-toxic concentrations, they reduced T. gondii growth. In association with gold standard drugs used to treat toxoplasmosis, capsaicin improved their antiparasitic activity. These results are unprecedented and encouraging, indicating the Capsicum chinense Jacq. peel and seed extracts and capsaicin display antifungal and antiparasitic activities. Introduction Health care-related infections (HAI) caused by multidrug-resistant bacteria are increasingly common in hospitals worldwide, and ESKAPEEc (Enterococcus faecium, Staphylococcus aureus, Klebsiella pneumoniae, Acinetobacter baumannii, Pseudomonas aeruginosa, Escherichia coli, and Enterobacter spp.) has been the most isolated species [1]. HAIs caused by fungal species, especially Candida spp., have also increased, significantly impacting morbimortality rates and hospital costs [2]. (Figure 2A,B). Biomass inhibition was high from 750 µg/mL PSEE (Figure 2A). PSEE and PSHE also afforded the best IC 50 values: 107.1 and 751.9 µg/mL, respectively. Biofilm cell viability inhibition was high from 187.5 µg/mL PSEE (Figure 2A). Figure 3 shows how the ability of the extracts and capsaicin to eradicate the C. glabrata (ATCC 2001) and C. tropicalis (CI) biofilms varied. The samples effectively reduced the cell viability of the preformed C. tropicalis (CI) biofilm. The best samples were CPS and PSEE, which gave MBEC 50 of 187.5 µg/mL and provided 80% biofilm eradication at the highest concentrations (3000 µg/mL) ( Figure 3B). Except for PPHE, all the samples at the highest tested concentrations destroyed over 80% of the preformed C. tropicalis (CI) biofilm. Biofilm Eradication Concerning the preformed C. 
glabrata (ATCC 2001) biofilm, its cell viability decreased significantly when the samples were tested at concentrations above 1500 µg/mL (Figure 3A). Cytotoxicity Assay on Host Cells BeWo cells lost viability after treatment with high PSHE and CPS doses for 24 h. The minimal dose that elicited the toxic effect was 512 µg/mL for PSHE and 256 µg/mL for CPS as measured by viability loss (Figure 4B,E). PPEE, PPHE, and PSEE did not alter cell viability at any tested concentration (Figure 4A,C,D). BeWo cells treated with 1.2% DMSO did not lose cell viability (Figure 4). The half Cytotoxic Concentration (CC50) against BeWo cells was 144.33 µg/mL for CPS, while for the active extracts, the CC50 was not determined. Figure 1. Inhibition of C. tropicalis CI biofilm by the hexane and ethanolic extracts from Capsicum chinense Jacq. peel and seeds (A-D), capsaicin (E), and amphotericin B (F). The blue line refers to the curve of the OD values of biomass in relation to the concentration of the tested sample. The black line refers to the curve of the OD values of biofilm metabolic activity in relation to the concentration of the tested sample. MICB 50 values refer to the concentration of the sample that was able to inhibit biofilm formation by at least 50% in relation to the biomass [17]. The IC 50 values indicate the sample concentration that was able to inhibit biofilm metabolic activity by half compared to the control group [18]. The low, moderate, and high classification refers to biofilm formation inhibition in biomass, where high inhibition corresponds to OD < 0.44, moderate inhibition corresponds to 0.44 < OD < 1.17, and low inhibition corresponds to OD > 1.17. This stratification was based on the classification of Candida spp. regarding biofilm formation as proposed by Zambrano et al. [19]. Figure 3. Percentage of viable cell eradication from the preformed C. glabrata (ATCC2001) (A) and C. tropicalis (CI) (B) biofilms at different concentrations of the hexane and ethanolic extracts from the Capsicum chinense Jacq. peel and seeds, capsaicin, and amphotericin B (C).
The MBEC 50 values (Minimum Biofilm Eradication Concentration) refer to the sample concentration that was able to reduce the cell viability of the preformed biofilm by at least 50%. Cells incubated with culture medium alone (negative control; black column) were considered as 100% viability. Data are expressed as means ± standard deviation. Significant differences detected by the Kruskal-Wallis test and Dunn's multiple comparison post-test are labeled (statistically significant when p < 0.05). ** p < 0.01, **** p < 0.0001. CPS Potentiates the action of SDZ + PYR to Control Parasite Growth For 24 h, intracellular T. gondii tachyzoites were allowed to grow in BeWo cells in the presence of SDZ + PYR (200 + 8 µg/mL, respectively) alone or in combination with different CPS concentrations (4 to 128 µg/mL). Association of SDZ + PYR with CPS (4 to 128 µg/mL) significantly reduced parasite proliferation compared to CPS alone (Figure 6). In addition, only the combination of SDZ + PYR with 64 µg/mL CPS inhibited T. gondii growth significantly more effectively than SDZ + PYR alone and CPS alone (Figure 6). Figure 6. The effects of the association of CPS and SDZ + PYR to control intracellular parasite proliferation. Infected BeWo cells were treated with CPS (4 to 128 µg/mL) alone or in the presence of SDZ + PYR (200 + 8 µg/mL, respectively) for 24 h. Next, T. gondii proliferation was quantified by measuring β-galactosidase activity. Significant differences detected by the Kruskal-Wallis test and Dunn's multiple comparison post-test are labeled (statistically significant when p < 0.05). * Comparison between CPS alone with CPS plus (SDZ + PYR). $ Comparison to both CPS alone and SDZ + PYR. $ p < 0.05, *** p < 0.001, **** p < 0.0001. Analysis of the Chemical Profile of C. chinense Jacq. Extracts by LC-ESI-MS The samples evaluated in the biological assays (PPHE, PPEE, PSHE, and PSEE) were analyzed by LC-MS-ESI to identify their chemical composition. Table 3 lists the chemical constituents present in the C. chinense Jacq. extracts. Organic and phenolic acids, flavonoids, capsaicinoids, and fatty acids were the main constituents of the ethanolic extracts from peel and seeds. As for the hexane extracts, they contained capsaicinoids and fatty acids as major constituents.
Discussion There are no reports on the antimicrobial action of extracts and compounds isolated from C. chinense Jacq. However, some studies have already demonstrated the antibacterial potential of molecules isolated from other peppers belonging to the genus Capsicum against clinical isolates of Streptococcus pyogenes (MIC values between 64 and 128 µg/mL) [16] and S. aureus (MIC = 1.2 µg/mL) [14] and standard strains of Porphyromonas gingivalis ATCC 33277 (MIC = 16 mg/mL), Enterococcus faecalis ATCC 6057 (MIC = 25 µg/mL), Escherichia coli ATCC 25922 (MIC = 5 µg/mL) and Klebsiella pneumoniae ATCC 29665 (MIC = 0.6 µg/mL) [14]. However, the samples investigated herein do not inhibit bacterial growth in the tested concentration range. The fact that we did not detect antimicrobial action of the extracts and capsaicin against bacterial species, unlike what was reported by other authors, can be explained by the fact that all bacterial isolates evaluated were multiresistant. Perhaps, the way in which the tested compounds prevent bacterial growth is inhibited by some resistance mechanism that the bacteria present. The antifungal action of extracts and molecules isolated from peppers belonging to the genus Capsicum, including capsaicin, has been little reported in the literature. Most studies have been carried out with phytopathogenic fungi, such as Aspergillus parasiticus [39] and Penicillium expansum [40]. Ozçelik et al. [41] evaluated the antimicrobial action of several Capsicum spp. components against standard C. albicans ATCC 10231 and C. parapsilosis ATCC 22019 strains and found MIC values lower than 16 µg/mL. The discrepancy between these results and the results of the present study might be related to the different types of extracts analyzed in each study and to the origin of the plant material. However, we have found that PSEE, PSHE, PPHE, and capsaicin present MIC values lower than 200 µg/mL against C. glabrata and C. tropicalis, which, according to Holetz et al. [42], indicates that these samples have antifungal action. Furthermore, capsaicin exerts a fungicidal action based on the MIC and MFC values. Nevertheless, according to Dorantes et al. [43], the capsaicin mechanism of action remains unknown, but this compound is believed to lyse the cell wall and consequently kill cells. According to Pappas et al. [2] and Colombo et al. [44], C. glabrata and C. tropicalis are the most frequent non-albicans species in HAIs in North America, Latin America, and Asia. In addition, C. glabrata and C. tropicalis isolates present increased fluconazole resistance [45,46], which limits the therapeutic options available for treating invasive candidiasis. Therefore, discovering molecules from natural compounds with promising antifungal action may lead to a therapeutic strategy in the future. Candida spp. produce virulence factors during the infectious process, contributing to worsening patient prognosis [3,5]. In recent years, there has been greater interest in investigating the attributes that contribute to the pathogenicity of this genus and finding ways to inhibit or reduce virulence factor production [5,46,47]. Among virulence factors, biofilm formation plays a crucial role in Candida spp. resistance to antifungal agents [5,48]. Thus, developing and using compounds that inhibit biofilm formation should be evaluated as an important therapeutic strategy when treating invasive candidiasis [46]. The results of this study are good, especially concerning the C. 
tropicalis clinical isolate-some of the samples tested here can inhibit biofilm formation by 50% even at sub-inhibitory concentrations. Regarding the preformed biofilm, the concentrations needed for reducing biofilm viability by 50% or more are higher than MICB 50 , which should be expected. As the fungal biofilm matures, its architecture becomes more resistant, making it difficult for compounds with antifungal action to penetrate fungal cells [48]. No literature study has evaluated the ability of extracts from pepper belonging to the genus Capsicum and of capsaicin to inhibit biofilm formation or eradicate preformed Candida spp. biofilms, so this is the first report in this sense. However, in studies carried out with other extracts and molecules, expressive inhibition of the biofilm formed by C. tropicalis clinical isolates has only been possible when concentrations equal to or greater than the MIC were employed [49]. Likewise, at sub-inhibitory concentrations, most samples investigated herein can significantly reduce the C. glabrata and C. tropicalis hemolytic activity, indicating the antivirulence potential of C. chinense Jacq. Nevertheless, no literature study has evaluated the anti-enzymatic action of extracts and molecules from Capsicum spp. against Candida species, so this is a pioneering study in this sense. Iron absorption from the lysis of red blood cells may be related to Candida spp. resistance to fluconazole [50]. In this context, crude extracts and molecules isolated from natural compounds that have relevant antivirulence action can be evaluated as possible adjuvants in the treatment of invasive infections to reduce the pathogen's virulence and, consequently, optimize the action of antifungals at lower concentrations. Despite the relevance of exoenzymes for Candida spp. virulence, the action of natural compounds in producing these enzymes remains poorly studied, mainly in relation to hemolysin. Most literature has evaluated compounds' interference in phospholipase and proteinase production [51]. To assess the anti-Toxoplasma activity of the investigated samples, we used human trophoblastic cells (BeWo cells), a well-established in vitro experimental model widely used for studying human congenital toxoplasmosis [10]. PPEE, PPHE, PSEE, and CPS can efficiently control T. gondii intracellular proliferation in BeWo cells at concentrations that are not toxic to host cells. This highlights the selective potential of the tested compounds against parasites. Additionally, the CPS antiparasitic activity can be potentialized when it is combined with the classical treatment (SDZ + PYR) against congenital toxoplasmosis. Piperaceae extracts have distinct pharmacological properties, especially against parasites [52] and tumor cell lines [53]. However, few studies have reported their anti-T. gondii activity. Corroborating with our study, Leesombun et al. [54] assessed the effects of ethanolic extracts from Thai piperaceae plants Piper betle, P. nigrum, and P. sarmentosum against infection with T. gondii by using in vitro and in vivo models. They demonstrated that P. betle is more effective than the other extracts in controlling the parasitic infection in HFF cells and mice [54]. Similarly, the water and ethanol extract from P. nigrum and Capsicum frutescens can reduce the number of T. gondii tachyzoites in the peritoneal fluid of infected mice [55]. 
Although numerous studies have shown the therapeutic potential of members belonging to the family Piperaceae, most of them have been limited to crude extracts. On the other hand, for the first time, our study has demonstrated the anti-T. gondii action of extracts from C. chinense Jacq. seeds and peel. We revealed the antiparasitic action of the isolated compound capsaicin using a model of congenital toxoplasmosis. Thus, C. chinense Jacq. can be an alternative source of compounds for treating congenital toxoplasmosis, as highlighted using capsaicin, which controlled the T. gondii growth rate with low toxicity effects on the host cells. With respect to the chemical composition of extracts from C. chinense Jacq. peel and seeds, LC-ESI-MS analysis, allowed us to identify several classes of metabolites such as organic and phenolic acids, flavonoids, capsaicinoids, and fatty acids. Particularly, organic acids, phenolic acids, and flavonoids occurred only in the ethanolic extracts of C. chinense Jacq. peel and seeds. Compounds of these classes of metabolites are known in the Capsicum genus, and some of them have already been identified in C. chinense fruit [30,56,57]. Some studies with peppers have shown that qualitative and quantitative variations between their constituents may occur [56,57,59]. Some factors such as genetics, cultivar type, maturation stages, irrigation, environmental conditions, soil type, seasonality, extraction processes, and analytical methods can promote these variations [59]. Phenolic acids and flavonoids are related to several biological activities; however, their antifungal actions have been highlighted [60]. Plant extracts rich in phenolic compounds and isolated compounds such as gallic acid (6), protocatechuic acid (8), p-coumaric acid (17), ferulic acid (20), and hydroxybenzoic acid (12) exert activity against various Candida species [60,61]. In particular, flavonoid aglycones may also contribute to the activity of ethanolic extracts because many of these compounds have antifungal effects alone or in synergistic combination with conventional medicines [62]. Some studies have also shown that flavonoid glycosides alone or in combination are potent anti-T. gondii agents [63]. Capsaicinoids are another group of metabolites that can be found in C. chinense extracts. Here, we have detected capsaicin (31) and dihydrocapsaicin (34). These two compounds have already been found to be the major capsaicinoids in the fruit of C. chinense and other peppers such as Capsicum annuum and Capsicum baccatum [56]. Capsaicinoids, particularly capsaicin, have been linked to several biological properties, including antibacterial and antifungal effects [13,30,64] and antiparasitic activity [65]. Based on literature data and the results obtained herein for the capsaicin standard, this molecule, together with dihydrocapsaicin (34), may contribute to the activities we have observed for the extracts. Still concerning the chemical composition of C. chinense peel and seeds, they contain several fatty acids that have great antifungal potential, including activity against Candida spp. [66,67]. Moreover, the antiparasitic potential of fatty acids has also been evidenced [68]. Therefore, apart from capsaicin, other chemical constituents identified in the extracts may have exerted some effect during the assays. The chemical composition of C. chinense Jacq. peel and seeds agree with the chemical composition of other species belonging to the genus Capsicum and other works on C. chinense. 
The classes of metabolites and some identified constituents have already been shown to be active in the biological assays, justifying the promising results found in this study. After the collection and identification steps, the peel and seeds were separated and placed in a circulating air oven at 35 • C for 7 days for drying. Then, the plant material was ground in a knife mill, and the ethanolic and hexane extracts were obtained by maceration at room temperature, as described by Silva et al. [69]. Briefly, in Erlenmeyer flasks, the material from the peel (0.55 kg) and seeds (0.16 kg) was initially extracted with hexane P.A. (400 mL) at room temperature for 48 h. This procedure was repeated four times. The final volume of hexane that was used in the procedure (1.6 L) was filtered and removed in a rotary evaporator under reduced pressure at 40 • C to give the pepper peel hexane extract (PPHE) and pepper seed hexane extract (PSHE). Subsequently, the remaining plant material was extracted from the peel and seeds with ethanol P.A. (400 mL). The same steps performed for the extraction with hexane were followed, leading to the pepper peel ethanolic extract (PPEE) and pepper seed ethanolic extract (PSEE). The capsaicin used in the study was obtained commercially (Sigma-Aldrich, Darmstadt, Germany) with purity ≥95%. Analysis by High-Performance Liquid Chromatography Coupled to Mass Spectrometry The C. chinense Jacq. extracts were analyzed by LC-MS on a liquid chromatograph (Agilent, model Infinity 1260) coupled to a high-resolution mass spectrometer QTOF (Quadrupole Time of Flight-Agilent, model 6520 B) with electrospray ionization source (ESI). The chromatographic conditions were Agilent Zorbax C18 column (2.1 mm × 50 mm, 1.8 µm) and ultrapure water with formic acid (0.1% v/v) (mobile phase A) and methanol (mobile phase B). A volume of 1.0 µL of the sample (2 mg mL −1 ) was injected into the chromatograph. The gradient elution system consisted of 10% B (0 min), 98% B (0-10 min), and 98% B (10-17 min) at a flow rate of 0.6 mL min −1 . The ionization parameters were nebulizer pressure of 58 psi and drying gas at 8 L min -1 at a temperature of 220 • C; energy of 4.5 kV was applied to the capillary. The analysis was performed in the negative mode [M-H] − under high resolution (MS). The molecular formula was proposed for each compound according to a list suggested by the MassHunter Workstation Qualitative Analysis Software Agilent ® (Version 10.0) following the smallest difference between the experimental mass and the exact mass, error in ppm, unsaturation equivalence, and nitrogen rule.. The molecular ions' sequential mass spectrometry (MS 2 ) was performed at different collision energies. The chemical composition of the extract was proposed by comparing the obtained mass spectra of the fragments and the mass obtained under high resolution with other works in the literature, Metlin library [20], and PubChem database [24,[26][27][28][35][36][37]. The standard capsaicin (Sigma-Aldrich, Darmstadt, Germany) with purity ≥95% in methanol (300 µg mL −1 ) was also analyzed under the same conditions to confirm its presence in the extracts. Microorganisms Microbiological assays were performed with standard strains from the American Type Culture Collection (ATCC) and clinical isolates (CI) of multidrug-resistant bacteria and Candida species from previous research and maintained at the Antimicrobial Assay Laboratory (LEA-UFU). 
The assayed microorganisms were Enterococcus faecalis (ATCC1299 and CI), Klebsiella pneumoniae (ATCC13883 and CI), Pseudomonas aeruginosa (48 1997 Determination of Minimum Inhibitory Concentration (MIC) and Minimum Bactericidal/Fungicide Concentration (MBC/MFC) The broth microdilution technique determined the antimicrobial activity of the extracts PPEE, PPHE, PSEE, and PSHE and capsaicin (CPS). This was determined by the broth microdilution technique, as proposed by the Clinical and Laboratory Standards Institute in documents M07-A9 and M27 for assays with bacterial and fungal isolates, respectively [70,71]. MIC, defined as the lowest concentration of the compound capable of inhibiting microorganism growth, was determined. The final concentrations of the tested samples varied between 0.0115 and 400 µg/mL and 0.98 and 3000 µg/mL in the bacterial and fungal assays, respectively. The antimicrobials amphotericin B (0.031 to 16 µg/mL) and tetracycline (0.0115 to 5.9 µg/mL) and the isolates C. krusei ATCC 6258, C. parapsilosis ATCC 22019, E. coli ATCC 25922, and S. aureus ATCC 25923 were used as test controls. The plates were incubated at 37 • C for 24 h and read after 30 µL of 0.02% aqueous resazurin solution was added to observe microbial growth. The development of a blue and pink color indicated the absence and presence of growth, respectively. MIC was the lowest concentration that maintained the blue color in the supernatant medium [72]. Then, 10 µL of the inoculum was removed from each well before resazurin was added to determine MBC and MFC, defined as the lowest concentration of the test sample without any microbial growth. The removed sample was plated with Muller-Hinton agar (bacteria) and Sabouraud Dextrose agar-ASD (yeasts). The presence or absence of growth was observed after incubation at 37 • C for 24 h. The tests were performed in triplicate in independent experiments. Antivirulence Action Evaluation Only the isolates that showed promising MIC results according to the classification proposed by Holetz et al. [42] were selected. For plant extracts, MIC values lower than 100 µg/mL, between 100 and 500 µg/mL, from 500 to 1000 µg/mL, and higher than 1000 µg/mL correspond to good antimicrobial activity, moderate antimicrobial activity, weak antimicrobial activity, and inactivity, respectively. Thus, only the samples and isolates for which MIC was lower than 500 µg/mL were selected. Candida spp. Antibiofilm Assay The ability of PPEE, PPHE, PSEE, PSHE, and CPS to inhibit biofilm formation and eradicate preformed biofilm was evaluated against C. glabrata (ATCC2001) and C. tropicalis (CI) based on biomass and cell viability. The tests were performed in 96-well plates, and sample preparation and assay were performed according to the broth microdilution methodology proposed by CLSI [71]. The fungal inoculum was prepared according to Pierce et al. [73]. The final concentration was adjusted to 1 × 10 6 cel/mL. Then, aliquots of this suspension were added to 96-well plates containing the samples diluted in RPMI-1640 with 2% glucose and buffered with MOPS ([N-morpholino] propane sulfonic acid) to obtain a final concentration ranging from 0.98 to 3000 µg/mL. The plates were incubated at 37 • C for 24 h for biofilm formation and adhesion. Plates intended for biomass evaluation were processed according to Marcos-Zambrano et al. [19] with modifications. 
Briefly, after incubation, well contents were gently aspirated, and the plates were washed three times with phosphate-buffered saline (PBS, pH: 7.2) to remove non-adhered cells. Then, the plates were fixed with methanol for 15 min and stained with 0.1% crystal violet solution for 20 min, and the solution was removed after submerging the plates in a container with distilled water. Finally, the adhered crystal was solubilized by adding 33% acetic acid for 30 min, and the absorbance of each well was determined after the plates were read in a spectrophotometer at a wavelength of 595 nm. Thus, it was possible to determine the minimum inhibitory concentration of the biofilm (MICB 50 ), defined as the lowest concentration of the sample capable of inhibiting biofilm formation by at least 50% [17]. Plates reserved for evaluating biofilm cell viability were processed according to Pierce et al. [73] and Oliveira et al. [46] with modifications. Then, after incubation, well contents were gently aspirated, and the wells were washed with PBS three times (to remove nonadhered cells). Next, 50 µL of menadione and 2,3-bis (2-methoxy-4-nitro-5-sulfophenyl)-2Htetrazolium-5-carboxanilide (MTT) with a final concentration of 0.5 mg/mL were added, and the plates were incubated at 37 • C for 6 h. After that, the formazan product was solubilized by adding 100 µL of dimethylsulfoxide (DMSO) for 10 min, and 80 µL from each well was transferred to another plate and read in a spectrophotometer at a wavelength of 490 nm. Thus, it was possible to calculate the concentration capable of inhibiting the cell viability of the biofilm by 50% (IC 50 ) [18]. Then, 100 µL of the isolate suspension with a concentration of 1 × 10 6 CFU/mL was added to the wells of the plates and incubated at 37 • C for 24 h to assess the ability of the samples to eradicate the preformed biofilm. Then, the non-adhered cells were removed by washing the wells with PBS three times, and aliquots of the samples diluted in RPMI 1640-MOPS were added to obtain a final concentration between 0.98 and 3000 µg/mL. The plates were incubated again at 37 • C for 24 h, and cell biomass and viability were determined as described above. Thus, it was possible to determine the concentration capable of eradicating at least 50% of viable cells from the preformed biofilm (MBEC 50 ) [74]. Amphotericin B (final concentration between 0.031 and 16 µg/mL) was used as a test control. Tests were performed in triplicate in independent experiments. Hemolysin Production Inhibition The ability of the samples to inhibit or reduce Candida spp. hemolytic activity at a subinhibitory concentration ( 1 ⁄2 MIC) was determined as proposed by El-Houssaini et al. [47] and Brondani et al. [75] with modifications. Briefly, from a 24-h culture of the isolate, a suspension was formed in tubes containing PBS with turbidity equivalent to tube 0.5 on the McFarland scale. Then, 500-µL aliquots of this suspension were added to tubes containing 500 µL of RPMI-1640-MOPS plus the test sample so that the final concentrations of the compounds in the tubes were equivalent to the 1 ⁄2 MIC values and that the final amount of fungal cells was 1 × 10 6 cells/mL in each tube. The material was incubated at 37 • C for 24 h. After this exposure, the tubes were centrifuged at 3000 rpm for 10 min, the supernatant was discarded, and the pellet was washed with PBS and centrifuged again under the same conditions. The procedure was repeated two more times, and the pellet was resuspended in PBS. 
Then, 5 µL of this suspension was deposited in equidistant points of Petri dishes containing ASD plus 7% defibrinated sheep blood and incubated at 37 • C for 48 h. The hemolytic activity was evidenced by the presence of a hemolysis halo around the colony. The hemolytic index (Hi) was determined by the ratio between the colony diameter (dc) and the hemolysis halo plus colony diameter (dcp). The results were classified as negative (Hi = 1), moderate (0.63 < Hi < 1), and severe (Hi ≤ 0.63) [76]. Amphotericin B was used as a test control. The assays were performed in triplicate at two different times. The suspension containing only the RPMI1640-MOPS isolate and broth was used as a positive control, and amphotericin B was used as a control. The test results were expressed as a percentage of inhibition of hemolytic activity according to El-Houssaini et al. [47]. Data normality was verified using the Shapiro-Wilk test. The significance of hemolysis inhibition was determined through analysis by the ANOVA One Way and Kruskal-Wallis test. GraphPad Prisma software, version 8.2, was used, and p-values lower than 0.05 were considered significant. 4.6. Antiparasitic Effects 4.6.1. Cell Culture and Parasite Maintenance Human trophoblast cells (BeWo lineage) were commercially purchased from the American Type Culture Collection (ATCC, Manassas, VA, USA). They were maintained in RPMI 1640 medium supplemented with 100 U/mL penicillin, 100 µg/mL streptomycin, and 10% heat-inactivated fetal bovine serum (FBS) in a humidified incubator at 37 • C and in 5% CO 2 . T. gondii tachyzoites (highly virulent RH strain, 2F1 clone) constitutively expressing the β-galactosidase gene were cultured as previously described [77]. 4.6.2. Host Cell Viability PPEE, PPHE, PSEE, PSHE, and CPS were solubilized in DMSO and diluted in supplemented RPMI 1640 medium to form a stock solution of 640 µg/mL as previously published [10]. Briefly, BeWo cells (3 × 10 4 cells/200µL/well) were seeded in 96-well microplates and treated or not with different concentrations of the tested samples (ranging from 4 to 512 µg/mL; twofold serial dilutions) at 37 • C and in 5% CO 2 for 24 h. Cells were also incubated with 1.2% DMSO (concentration used in the highest treatment dose: 512 µg/mL). We referred to published data to establish the work concentration of sulfadiazine (SDZ) and pyrimethamine (PYR) (200 + 8 µg/mL). The chosen doses have been shown as non-toxic for BeWo cells [78]. BeWo cells were incubated with 5 mg/mL MTT reagent at 37 • C in 5% CO 2 for 3 h, which was followed by addition of 10% SDS and 0.01 M HCl (37 • C, 5% CO 2 , 18 h,) [79]. Absorbance (570 nm) was measured with a multi-well scanning spectrophotometer. Cell viability was expressed in percentages, with the absorbance of cells incubated with culture medium only considered as 100% viability (Viability %). Dose-response inhibition curves (log (inhibitor) vs. normalized response-variable slope) were obtained using GraphPad Prism Software version 9.3.0. 4.6.3. T. gondii Intracellular Proliferation Assay by β-galactosidase Activity BeWo cells (3 × 10 4 cells/200 µL/well) were seeded in 96-well microplates. After adhesion, the cells were infected with a highly virulent T. gondii strain (RH strain, 2F1 clone) at a multiplicity of infection (MOI) of 3:1 (ratio of parasites per cell) in RPMI 1640 medium containing 2% FBS at 37 • C in 5% CO 2 . 
After 3 h of invasion, the medium was discarded, and the cells were rinsed with culture medium and incubated at 37 • C and in 5% CO 2 for 24 h with non-toxic concentrations in twofold serial dilutions, as follows: PPEE, PPHE, and PSEE (4 to 512 µg/mL), PSHE (4 to 256 µg/mL), and CPS (4 to 128 µg/mL). Based on the literature, the present study used the standard treatment with SDZ + PYR (200 + 8 µg/mL, respectively) for comparison with the treatments [78]. The number of tachyzoites was calculated compared to a standard curve produced with free tachyzoites. T. gondii-infected BeWo cells incubated with a culture medium in the absence of any treatment were used as negative treatment control (non-inhibited parasite growth) [10,79]. Doseresponse inhibition curves (Log (inhibitor) vs. normalized response-Variable slope) were obtained using GraphPad Prism Software version 9.3.0. In addition, the selectivity indexes (SI) were calculated based on the CC50 BeWo cells/IC50 T. gondii ratio. Finally, we assessed whether the anti-T. gondii action of CPS could be potentialized in the presence of SDZ + PYR. Briefly, BeWo cells (3 × 10 4 cells/200 µL/well) were seeded in 96-well microplates and infected with T. gondii tachyzoites (3:1) in RPMI 1640 medium containing 2% CO 2 . After 3 h, non-invaded parasites were removed by washing with a culture medium. The cells were treated with CPS (4 to 128 µg/mL) alone or in the presence of a certain concentration of SDZ + PYR (200 + 8 µg/mL, respectively) at 37 • C and in 5% CO 2 for 24 h. As controls, the cells were incubated with a culture medium only or with SDZ + PYR alone. Finally, T. gondii intracellular proliferation was calculated by the β-galactosidase assay, as mentioned above. Conclusions The results of this study are unprecedented and encouraging. We have shown that the extracts from C. chinense Jacq. and capsaicin display antifungal action. In addition, they can significantly inhibit the production of virulence factors (such as biofilm formation and hemolytic activity) that are important for the onset and maintenance of invasive candidiasis by C. glabrata and C. tropicalis thereby indicating their antivirulence potential. Furthermore, we have demonstrated the antiparasitic potential of capsaicin at concentrations that are not toxic to host cells, which attests to the selectivity of this compound toward Candida spp. and T. gondii. However, studies involving microscopy and molecular assays are needed to elucidate capsaicin's antifungal and antivirulence action and understand the pathways through which this molecule exerts this effect. In vivo studies for evaluating the toxicity and behavior of capsaicin in the fight against pathogens in the host should also be carried out to confirm the results presented here so that in the future, this compound can be considered a possibility for treating fungal and parasitic infections.
A Comparison between Different Error Modeling of MEMS Applied to GPS/INS Integrated Systems Advances in the development of micro-electromechanical systems (MEMS) have made possible the fabrication of cheap and small-dimension accelerometers and gyroscopes, which are being used in many applications where global positioning system (GPS) and inertial navigation system (INS) integration is carried out, i.e., identifying track defects, terrestrial and pedestrian navigation, unmanned aerial vehicles (UAVs), stabilization of many platforms, etc. Although these MEMS sensors are low-cost, they present different errors, which degrade the accuracy of the navigation systems in a short period of time. Therefore, a suitable modeling of these errors is necessary in order to minimize them and, consequently, improve the system performance. In this work, the techniques currently most used to analyze the stochastic errors that affect these sensors are presented and compared: we examine in detail the autocorrelation, the Allan variance (AV) and the power spectral density (PSD) techniques. Subsequently, an analysis and modeling of the inertial sensors that combines autoregressive (AR) filters and wavelet de-noising is also carried out. Since a low-cost INS (MEMS grade) presents error sources with short-term (high-frequency) and long-term (low-frequency) components, we introduce a method that compensates for these error terms through a complete analysis of Allan variance, wavelet de-noising and the selection of the level of decomposition for a suitable combination of these techniques. Eventually, in order to assess the stochastic models obtained with these techniques, the Extended Kalman Filter (EKF) of a loosely-coupled GPS/INS integration strategy is augmented with different states. Results show a comparison between the proposed method and the traditional sensor error models under GPS signal blockages using real data collected on urban roadways. Wavelet de-noising has been used in similar works because of its great effectiveness in removing high-frequency noises, as shown in [3][4][5][6]. However, it has limited success in removing the long-term inertial sensor errors [7]. Moreover, Allan variance (AV) is a widely used technique in the modeling of inertial sensors, which can take into account the long-term noises [8][9][10][11]. Thereby, we present a mixture of the wavelet de-noising technique and Allan variance with the purpose of evaluating the accuracy enhancement of the inertial sensors when these methods are blended together. It is worth mentioning that most of the works where Allan variance is reported are limited to just modeling the stochastic errors of the inertial sensors, and the accuracy of the parameters estimated by the AV is only sometimes tested in real scenarios.
Therefore, even though there are works where AV and wavelet de-noising are performed (e.g., [6]), there is not a complete analysis and evaluation under several dynamic conditions when these two techniques are blended together in a GPS/INS-integrated system for land vehicle navigation. In this sense, the contribution of this work is the comparison among some of the most used methods for modeling the stochastic error and a complete analysis for a suitable combination between AV and wavelet de-nosing, including the selection of the level of decomposition. This paper is organized as follows. Firstly, we begin with an introduction to the noises that are involved in a low-cost INS (micro-electromechanical systems (MEMS) grade) (Section 2). Secondly, the architecture employed to integrate INS and GPS data is described, as well as the state-space form of different error models (Section 3). Thirdly, the analysis of the underlying random processes that affect the inertial sensors is achieved by different techniques: autocorrelation, Allan variance (AV), power spectral density (PSD) and autoregressive processes (Section 4). Subsequently, the parameters of various stochastic models are obtained from the methods presented in the previous section by using experimental data collected in the laboratory (Section 5). This section also explains the combination between wavelet de-noising and autoregressive (AR) models with different orders, and the combination between AV and wavelet de-noising techniques using different levels of decomposition. Finally, the models that are identified using AV and PSD, wavelet de-noising/AR models and the proposed method based on wavelet de-nosing/AV are adapted to the loosely-coupled integration, assessed using real data collected in urban roadways and compared (Sections 6 and 7). MEMS-Based INS The strapdown inertial navigation system (INS) involves mechanization equations, which are the numerical tool to implement the physical phenomenon that relates the inertial sensor measurements to the navigation state (i.e., position, velocity and attitude) [10]. The shaded rectangle in Figure 1 represents the INS mechanization equations that can describe the motion of a vehicle, taking as input the inertial measures in the body frame (accelerations and angular rotations) and converting these measurements into a reference frame for navigation. In this case, it provides position, velocity and attitude of the vehicle with respect to the North-East-Down (NED) local geodetic frame. The complete derivation of the navigation equations is reported in [10,12,13]. Figure 1. Navigation frame inertial navigation system (INS) mechanization; figure kindly taken from [14]. The inertial measurement unit (IMU), which is part of the INS, is the device where the inertial sensors are mounted; it provides the accelerations and angular rotations along three orthogonal directions with respect to the body frame ( Figure 1). In a low-cost INS (MEMS grade), the measurement of these accelerometer and gyro sensors is affected by different errors, which can be classified as deterministic and stochastic errors. Figure 2 depicts some of these errors through a simple relationship between IMU physical signal and the sensor output. [15]. 
Figure 2. Relationship between the IMU physical signal and the sensor output: misalignment, scale factor, bias, and random error [15]. Deterministic errors are due to manufacturing and mounting defects and can be calibrated out from the data; on the other hand, the stochastic errors are the random errors that occur due to random variations of bias or scale factor over time [10]. Several errors affect the inertial sensors: the misalignment errors are the result of non-orthogonalities of the sensor axes and are usually treated as deterministic errors. The scale factor represents the sensitivity of the sensor and is the result of manufacturing tolerances or aging; it is usually divided into a linear and a non-linear part, where the linear part is obtained from calibration, while the non-linear part is modeled with a stochastic process [16]. The bias is divided into bias turn-on and bias-drift: the bias turn-on is constant, but it varies from turn-on to turn-on and is considered a deterministic error; the bias-drift presents a random behavior and needs to be modeled with a stochastic process [17]. Regarding the random error (Figure 2), this is an additional signal resulting from noise of the sensor itself or of other components that interferes with the signal provided by the sensor; it is also considered part of the stochastic error of the sensor. An additional factor that affects MEMS-based inertial sensors is temperature; however, it will not be covered in this work. For further details on the temperature dependence of the stochastic error and the different errors that affect MEMS sensors, refer to [10,16,18,19]. The deterministic errors can be minimized before implementing the mechanization equations by following different laboratory calibration procedures (see [19]). In this work, we focus on the stochastic error, specifically on the bias-drift, since the stochastic modeling of this error is a challenging task, not only because of its random nature, but also because it seriously affects the performance of a navigation system. For further details on the impact of this error, refer to [20,21], which analyze how the position error grows when different bias-drifts affect the inertial sensor measurements. Therefore, a suitable estimation of the stochastic model parameters of this error will improve the performance of the INS; as a consequence, the input error to the mechanization stage (Figure 1) can be compensated and, in turn, the position error minimized. The next section presents some of the stochastic processes that are usually used to model the bias-drift that affects the INS and their state-space representation. We also explain the loosely-coupled architecture that is used to integrate INS and GPS data. Loosely-Coupled KF Integration It is common to blend GPS and INS using different integration approaches (i.e., loosely-coupled, tightly-coupled or ultra-tightly coupled; see [22][23][24]). In this paper, we confine our attention to the loosely-coupled (LC) approach, because this strategy can be used to evaluate the behavior of the inertial sensor stochastic model without any additional support during partial or complete GPS outages; this is not the case for tightly-coupled integration, where even a single available satellite signal may be used to update the Extended Kalman Filter (KF), since tightly-coupled integration uses GPS estimates of pseudoranges and Doppler determined from satellite ephemeris data.
There are two ways to implement the LC strategy: feed-forward and feed-back. The first one is used in systems that have a high-performance inertial measurement unit (IMU), as it merges the GPS/INS information but has no control over the error that may occur in the IMU; it basically works with an open-loop architecture. On the other hand, the feed-back implementation includes a closed loop that allows us to correct the INS error; in the case of a GPS outage, the navigation solution will depend only on the INS, which will be corrected by its corresponding inertial sensor error model. The block diagram of the GPS/INS integration with feedback is shown in Figure 3. In this strategy, the position and velocity obtained from the mechanization (r^n_INS, v^n_INS) are combined with the GPS, which delivers velocity and position data (r^n_GPS, v^n_GPS). The residual error (δR^n, δV^n) calculated from the GPS and INS outputs is the input to the Kalman Filter (KF), where a state-space model is built with error states for navigation and IMU errors. The error states related to the IMU errors are fed back through the closed loop in order to correct the INS navigation solution. The system model for the loosely-coupled approach is given by the position error, velocity error and attitude error, which represent the navigation error states, i.e., a total of nine states for 3D navigation. Moreover, the scale factors and biases for the gyros and accelerometers are included in the IMU error states, and the number of states will depend on the stochastic model employed. The next section will describe the stochastic processes that will augment the state-space model with the IMU error states associated with the inertial sensor bias-drift. The corresponding stochastic model for each error state will be selected in Section 6, after having analyzed the inertial sensor data with the techniques that will be explained in Section 4. For further details about the navigation error states for the loosely-coupled integration, refer to [10,13,23]. State-Space Representation for Different Bias Models Various stochastic processes are well detailed in [26][27][28]. In this section, we will focus on the ones that will be used to augment the LC integration. The bias-drift models adapted into the LC approach are the first order Gauss-Markov, random walk and autoregressive processes, the last of which is a generalized representation of the first. First order Gauss-Markov (GM): This process has been widely used for modeling random errors, not only because it is able to represent a large number of physical processes, but also because it has a relatively simple mathematical description [29]. The continuous model for this process is described by the following equation: ẋ(t) = −(1/T_c) x(t) + w(t) (1), where x is a random process with zero mean, correlation time, T_c, and driven noise, w. The corresponding discrete time equation can be written as: x_{k+1} = e^{−∆t/T_c} x_k + w_k (2), where ∆t is the sampling time and w_k is a white noise with noise covariance: q_k = σ²_{x_k} (1 − e^{−2∆t/T_c}) (3), where σ²_{x_k} = σ²_GM is the covariance of the process. The continuous time representation of the noise covariance can be expressed as: q(t) = 2 β σ²_GM (4), where β is the inverse of the correlation time, T_c. Once the correlation time (T_c) and the covariance of the process (σ²_{x_k}) are obtained by Allan variance (see Section 4.4), the model of the first order GM process can be implemented as a state-space in the Extended Kalman Filter (EKF), either with Equation (1) or Equation (2), depending on whether the transition matrix is in continuous or discrete time.
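As a quick illustration of the discrete-time model in Equations (2) and (3), the Python sketch below simulates a first order GM bias-drift; the correlation time, process standard deviation and sampling rate are assumed placeholder values, not parameters identified from any particular sensor.

```python
import numpy as np

def simulate_first_order_gm(sigma_gm, T_c, dt, n_samples, seed=0):
    """Discrete first-order Gauss-Markov process (Eqs. (2)-(3)):
    x[k+1] = exp(-dt/T_c) * x[k] + w[k], with var(w) = sigma_gm**2 * (1 - exp(-2*dt/T_c))."""
    rng = np.random.default_rng(seed)
    phi = np.exp(-dt / T_c)                              # discrete transition factor
    q = sigma_gm ** 2 * (1.0 - np.exp(-2.0 * dt / T_c))  # driving-noise variance
    x = np.zeros(n_samples)
    for k in range(n_samples - 1):
        x[k + 1] = phi * x[k] + np.sqrt(q) * rng.standard_normal()
    return x

# Assumed example values: 300 s correlation time, 0.01 deg/s process std, 100 Hz sampling
bias_drift = simulate_first_order_gm(sigma_gm=0.01, T_c=300.0, dt=0.01, n_samples=360_000)
print("sample standard deviation of the simulated bias-drift:", bias_drift.std())
```

In an EKF implementation, the same phi and q values would populate the transition-matrix and process-noise entries of the augmented bias state.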
Random walk (RW): This process results when uncorrelated signals are integrated, e.g., when white noise is integrated during the mechanization stage. The continuous and discrete time forms of the RW are represented by: ẋ = w (5) and x_{k+1} = x_k + w_k (6), where w_k is a white noise with noise covariance q_k = q(t_{k+1} − t_k) = σ²_RW ∆t. The uncertainty of the random walk increases with time; therefore, it is a non-stationary process [26]. However, it can be considered stationary within small time intervals [30]. The noise covariance of the RW process can be obtained from power spectral density or Allan variance analysis, which will be described in Sections 4.3 and 4.4, respectively. This process is used to represent rate/acceleration random walk (K). A typical bias-drift of an inertial sensor can be represented by a combination of different random processes, such as white noise (WN), RW and first order GM processes. These processes can be added into the KF by writing them in a state-space model. According to the previous definitions, a random process that combines WN, RW and first order GM can be generated using the following discrete time-invariant state-space model: x_{k+1} = [e^{−β∆t} 0; 0 1] x_k + w_k (7) and y_k = [1 1] x_k + σ_WN v_k (8), where the first state is the first order GM component, the second state is the RW component, w_k is the corresponding driving-noise vector, v_k is a unit-variance white noise, σ_WN is the standard deviation of the white noise process and y_k is the result of combining WN, RW and first order GM. Equations (7) and (8) are easily adapted into the KF equations, since they are represented in state-space form. In this example, the bias-drift (y) would be modeled by the combination of three noises, i.e., y = WN + 1st GM + RW. Autoregressive (AR) process: An AR process is a time series produced by a linear combination of past values, which can be described by the following linear equation [31]: x(n) = −∑_{k=1}^{p} α_k x(n−k) + β_0 w(n) (9), where x(n) is the process output, which is a combination of past outputs plus a white noise, w(n), with standard deviation, β_0; p is the order of the AR process and α_k are the model parameters. In order to include the AR process in the EKF transition matrix, it is necessary to express Equation (9) in state-space form. If we consider a third order AR process, the corresponding state-space form can be expressed as follows [29]: [x_1 x_2 x_3]ᵀ_{k+1} = [0 1 0; 0 0 1; −α_3 −α_2 −α_1] [x_1 x_2 x_3]ᵀ_k + [0 0 β_0]ᵀ w_k (10). This represents the AR model in state-space for one of the inertial sensors. It should be noted that if the order of the AR model increases by one, the variables in the state vector of the Kalman filter will increase by six, since this model is applied to each axis of the inertial sensors. The stochastic processes that are used to model the inertial sensor bias-drift are augmented into the Kalman filter, as was explained in this section. In order to obtain the parameters of each stochastic process, an analysis of the sensor data needs to be done. The methods addressed to get these parameters are discussed in Section 4, and the experimental analysis of each method is presented in Section 5. Identifying and Extracting Stochastic Model Parameters The stochastic modeling of the inertial sensors is a challenging task that, in most practical cases, is performed by tuning the GPS/INS Extended Kalman Filter (which is often sensitive and difficult), by using the available sensor specifications (although low-cost sensors do not provide enough information to develop this sort of model), or by experience [32]. Therefore, different works have been carried out in order to obtain a suitable estimation of the stochastic model parameters [5,6,8,9,33,34]. In this section, we describe the most used methods for noise identification and extraction of the noise parameters for stochastic modeling of inertial sensors.
Additionally, an introduction to the wavelet de-noising technique is presented at the end of the section. Autocorrelation The autocorrelation function has been used in previous works to analyze the stochastic error of the inertial sensors [5,33] and also to obtain the parameters for modeling using the first order Gauss-Markov (GM) process. As explained in the previous section, this process seems to fit a large number of physical processes with reasonable accuracy. For a random process, x, with zero mean, correlation time, T_c, and driven noise, w, the first order Gauss-Markov (GM) process is described by Equation (1). The parameters needed to implement this process can be extracted from its autocorrelation function (Figure 4), which is given by: R_xx(τ) = σ² e^{−β|τ|} (11), where the correlation time is T_c = 1/β and σ² is the variance of the process at zero time lag (τ = 0). The most important characteristic of the first order GM process is that it can represent bounded uncertainty, which means that the correlation coefficient at any time lag, τ, is less than or equal to the correlation coefficient at zero time lag, R_xx(τ) ≤ R_xx(0) [26]. One of the limitations of this method is that an accurate autocorrelation curve is rarely obtained from experimental data, because the collected data are limited and finite. As discussed in [29], the accuracy of the autocorrelation depends on the length of the recorded data. In [5,33,34], it was shown that the autocorrelation function of experimental inertial sensor data might not behave as a first order GM process, which is equivalent to a first order autoregressive process. This means that a first order autoregressive process alone may not be adequate to model the bias-drift behavior that affects the performance of the inertial navigation system. In fact, in most cases when low-cost IMUs are used, the shape of the autocorrelation follows higher order Gauss-Markov processes. As a consequence, higher order autoregressive processes are more appropriate to model inertial sensor stochastic errors [35]. Despite this, the autocorrelation analysis can be useful to determine the degree of correlation of the underlying random processes that affect the sensors and, also, whether the uncorrelated noise can be removed after filtering the sensor signal. This issue will be discussed in Section 5.2. Autoregressive Processes To avoid the problem of inaccurate modeling of inertial sensor random errors, as occurs with the imprecise autocorrelation function described in Section 4.1, another method, which was introduced in [5], can be applied. There are different works where autoregressive (AR) models have been evaluated; some of them are well detailed in [5,31,33,34]. Although the first order Gauss-Markov (GM) process has been very useful for modeling random errors of inertial sensors, better stochastic modeling can be achieved by modeling these errors as higher order AR models [33]. In addition, the autocorrelation of the random error for MEMS sensors often seems to follow a higher order GM process, which can be modeled using an appropriate AR model. In Equation (9), it is assumed that the coefficients (β_0, α_k) are computed so that the linear system is stable, making the model stationary [26]. It should be noted in Equation (9) that if p = 1, then the AR process approximates a first order GM process. On the other hand, if p = 1 and α_1 = −1, it becomes a random walk (RW), and if α_1 = 0, it would be a white noise (WN).
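To make the state-space form used in Equations (9) and (10) concrete, the sketch below builds companion-form transition, output and process-noise matrices for an AR(p) bias state in Python. The coefficients shown are arbitrary placeholders standing in for values that would be estimated from de-noised static data (e.g., with Burg's method); they are not results from this work.

```python
import numpy as np

def ar_state_space(alpha, beta0):
    """Companion-form state-space for the AR(p) model of Eq. (9):
    x(n) = -(alpha[0]*x(n-1) + ... + alpha[p-1]*x(n-p)) + beta0*w(n).
    The last element of the state vector carries the current bias value."""
    p = len(alpha)
    F = np.zeros((p, p))
    F[:-1, 1:] = np.eye(p - 1)              # shift the delayed samples
    F[-1, :] = -np.asarray(alpha)[::-1]     # last row: -alpha_p ... -alpha_1
    G = np.zeros((p, 1)); G[-1, 0] = beta0  # driving noise enters the last state
    H = np.zeros((1, p)); H[0, -1] = 1.0    # the bias output picks the last state
    Q = G @ G.T                             # process-noise covariance for unit-variance w
    return F, H, Q

# Assumed AR(3) coefficients for one gyro axis (illustrative only, chosen to be stable)
F, H, Q = ar_state_space(alpha=[-1.70, 0.85, -0.14], beta0=2.0e-4)
print(F)
```

Each inertial sensor axis would contribute one such block to the EKF, which is why raising the AR order by one adds six states in total.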
The coefficients of this process are estimated by Burg's method, since it overcomes some of the drawbacks of other methods by providing more stable models and improved estimates with shorter data records [36]. In this paper, we focus on AR models up to the third order, since a higher order would increase the computational load and might result in unstable solutions [5]. This method is usually used after applying wavelet de-noising to the static inertial sensor data, which is explained in Section 4.5. Power Spectral Density Power spectral density (PSD) is an important descriptor of a random process, because it provides information of the signal that is not easy to extract from the time domain. The PSD is related to the autocorrelation function with: where, S x (jw) is the power spectral density of the process, x, F [·] indicates Fourier transform, and R xx (τ ) is the autocorrelation of the process, x [37]. Basically, the PSD is used to identify the stochastic errors of the inertial sensors from the frequency components, and the parameters obtained from the PSD are eventually used in the stochastic model of the INS. Figure 5 depicts a hypothetical inertial sensor PSD in single-sided. According to this curve, the noise sources might be identified considering the slopes, i.e., a slope of −2 represents the rate\acceleration random walk noise for gyro and accelerometer, respectively. Obviously, the number of random noises that might be present in the curve depends on the type of sensors. The noise terms that can be identified with the PSD are well detailed in [8,11,37]. Although it is not covered in this paper, it should be mentioned that recently, a more effective analysis in the frequency-domain has been presented by El-Diasty and Pagiatakis, where a GPS/INS impulse response model that is applied in the bridging GPS outages using as input the INS-only navigation solution is developed; for further details about this method, refer to [38]. So far, we have presented the autocorrelation, where the stochastic model parameters are extracted from the autocorrelation curve, the autoregressive processes that estimates the coefficients of an AR model applying Burg's method over the de-noised sensor data and the power spectral density that identifies the noise terms based on the slopes in a log-log PSD curve. The following section will describe the Allan variance technique, which is similar to the PSD, but in the time domain. Allan Variance The Allan variance (AV) is a time domain analysis technique originally developed to study the frequency stability of oscillators [39]. More recently, this has been successfully applied to the modeling of inertial sensors [6,19,[39][40][41], and two key documents to determine the characteristics of the random processes that give rise to the measurement noise of the sensors using this technique are [8,11]. As such, AV helps in identifying the source of a given noise term in the observed data [11]. The Allan variance is estimated as follows: where T represents the correlation time, or cluster time, i.e., the time associated with a group of n consecutive observed data samples, N is the length of the data that will be analyzed and θ is the output velocity, in the case of the accelerometers, and output angle, in the case of the gyros; these measurements are made at discrete times from the inertial sensors. The basic idea to estimate the AV is to take a long sequence of data (N ), where the IMU is in a static condition. 
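Before continuing with the Allan variance computation, the Burg estimation mentioned above can be sketched as follows; this is a compact real-valued recursion rather than a library call, written with the sign convention of Equation (9).

```python
import numpy as np

def burg_ar(x, order):
    """Burg's method for AR coefficient estimation (compact real-valued sketch).
    Model convention of Equation (9): x[n] = -(a1*x[n-1] + ... + ap*x[n-p]) + w[n].
    Returns (a, e) with a = [a1, ..., ap] and e the driving-noise variance."""
    x = np.asarray(x, dtype=float)
    ef, eb = x.copy(), x.copy()          # forward / backward prediction errors
    a = np.zeros(0)
    e = np.dot(x, x) / len(x)            # prediction error power
    for _ in range(order):
        efp, ebp = ef[1:], eb[:-1]
        k = -2.0 * np.dot(ebp, efp) / (np.dot(efp, efp) + np.dot(ebp, ebp))
        a = np.concatenate((a + k * a[::-1], [k]))   # Levinson-type coefficient update
        ef, eb = efp + k * ebp, ebp + k * efp
        e *= 1.0 - k * k
    return a, e

# Sanity check: for a first order GM-like series, burg_ar(x, 1) returns a1 close to
# -exp(-dt/Tc); for a random walk, a1 approaches -1, consistent with Section 4.2.
```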
After having removed the turn-on bias from the gyros' and accelerometer's stored data, the output of the inertial sensor is integrated to get θ. Thus, the AV can be computed through Equation (13). In AV, the uncertainty in the data is assumed to be generated by noise sources of specific character, as for instance, rate random walk, angle random walk, bias instability, etc. In order to obtain the covariance of each noise source affecting the sensor output, it is necessary to analyze the computed AV result by Equation (13). This is usually achieved by plotting a log-log AV curve, as is depicted in Figure 6, from which the covariance values for each error can be extracted doing a similar analysis to the one performed with the PSD curve. Figure 6. Hypothetical Allan variance (AV) of an inertial sensor; AV plot from the IEEE Std 952-1997 [11] . The AV obtained from Equation (13) is related to the two-sided PSD by: where S x (f ) is the PSD of the random process, x, written in Equation (12). An interpretation of Equation (14) is that the Allan variance is proportional to the total noise power of the sensor output when passed through a bandpass filter with transfer function sin 4 (πf T )/(πf T ) 2 . This filter depends on T , which suggests that different types of random processes can be examined by adjusting the correlation time (T ). Thus, the AV provides a mean of identifying and quantifying various noise terms that exist in the data [11]. Computation of AV needs a finite number of clusters that can be generated from the raw data measurements of the sensors. Depending on the size of these clusters, AV can identify any noise term that is affecting the data sensor. It is important to mention that the estimation accuracy of the AV for a given T depends on the number of independent clusters within the data set [11]. The bigger the number of independent clusters, the better the estimation accuracy. It has been described in [8] that the percentage error of AV, σ(δ), in certain σ(T ) and with a data set of N points is given by: where N is a set of data points collected from the sensors and n is the number of data points of the cluster in estimating σ(T ). Equation (15) shows that the estimation errors in the region of short cluster length, T , are small, as the number of independent cluster in these regions is large. On the other hand, the estimation error in the region of long cluster length, T , are large, as the number of independent clusters in these regions is small [8,11]. For example, if 360,000 data points are collected from an inertial sensor and if we want to compute the estimation accuracy of the AV for a bias instability ( Figure 6) with a characteristic time of 10 min, we will have 60,000 points with a sampling frequency of the sensor equal to 100 Hz. According to Equation (15), the percentage error of the AV for this random process would be approximately 32%. The following section presents wavelet de-noising technique, which will be combined with autoregressive processes, as well as Allan variance. Wavelet De-Noising The Discrete Wavelet Transform (DWT) is a widely used technique in digital signal processing, and one of its characteristics is that allows us to do a multiresolution analysis. Basically, when DWT is applied to a signal, x(n), this is filtered with low-pass, h 0 (n), and high-pass, h 1 (n), filters (the coefficients of each filter depend on the wavelet function). Subsequently, a sub-sampling by two is done. 
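As a concrete illustration of the Allan variance estimator and of the percentage error of Equation (15), a minimal sketch is given below; it uses the overlapped form of the estimator, and the cluster sizes are left as an input.

```python
import numpy as np

def allan_deviation(omega, fs, cluster_sizes):
    """Overlapped Allan deviation of a static rate/acceleration record (Eq. (13) sketch).
    omega: raw sensor output, fs: sampling rate [Hz], cluster_sizes: n in samples."""
    theta = np.cumsum(omega) / fs                   # integrate to angle / velocity
    N = len(theta)
    taus, adev = [], []
    for n in cluster_sizes:
        if 2 * n >= N:
            break
        T = n / fs
        d = theta[2 * n:] - 2.0 * theta[n:-n] + theta[:-2 * n]
        taus.append(T)
        adev.append(np.sqrt(np.mean(d ** 2) / (2.0 * T ** 2)))
    return np.array(taus), np.array(adev)

def av_percentage_error(N, n):
    """Estimation uncertainty of the AV, Equation (15): 1 / sqrt(2 (N/n - 1))."""
    return 1.0 / np.sqrt(2.0 * (N / n - 1.0))

# av_percentage_error(360_000, 60_000) -> 0.316, i.e. roughly the 32% quoted above
```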
Wavelet multiple levels of decomposition (LOD) are obtained by repeating this stage on the sub-sampled output of the low-pass filter, h 0 (n), as shows Figure 7. After applying DWT, the spectrum of the signal, x(n), is divided into different sub-bands with different resolutions, as can be seen in Figure 8. The most significant coefficients of the signal, x(n), are the approximations (A k ). This means, that they have the majority of the information of the signal, while the high-frequency components are know as details (D k ), and as its name says, they are details of the signal, x(n), that in most cases, are high-frequency noise components. ( ) x f f Moreover, wavelet de-noising takes advantage of the sub-band decomposition performed by the DWT and removes the noise by eliminating the frequency components that are less relevant; in general, this procedure is called wavelet de-nosing and is well described in [3,35,42,43]. This technique is the current state-of-the-art technique used in the accuracy enhancement of inertial sensors [3][4][5]7]. Since inertial sensors are composed by long-term and short-term noises, wavelet de-noising can be applied in order to remove part of the high-frequency components (short-terms noises). Although wavelet de-noising of INS sensors has had limited success in removing both noise components, it has been combined with AR processes and the autocorrelation function by using the inertial sensor measurements in static conditions. Basically, when it is applied in the autocorrelation method, the uncorrelated noise is removed using wavelet de-noising in order to obtain a smooth autocorrelation function that can be associated to a stochastic process. In the case of the AR process, wavelet de-noising is applied, and then the AR coefficients are estimated from the residual noise. Wavelet de-noising might be used to remove long-term noises (low-frequency) by increasing the level of decomposition that at the same time, increases the number of frequency bands that can be de-noised. However, in land-vehicle applications, these low-frequency components consist not only of long-term noises, but also vehicle motion dynamics. Since wavelet de-noising can be used to remove the high-frequency components and the AV method can be used to model the long-term noises without removing the vehicles motion, these two methods are combined in order to enhance the INS accuracy. The mixture between these two techniques is addressed in the following section, as well as the experimental analysis for each method explained. Inertial Measurement Unit and Data Acquisition In order to evaluate and compare the previous methods, the static data for analysis was obtained from the IMU 3DM-GX3-25 MEMS grade of MicroStrain ( Figure 9). It combines a triaxial accelerometer, triaxial gyro, triaxial magnetometer and temperature sensors. It also includes analog anti-aliasing filters, which accomplish a noise filtering. This stage is followed by a two stage digital moving average filter [44], and the on-board processor of the IMU 3DM-GX3-25 implements these filter stages. Additionally, all quantities are temperature compensated and are mathematically aligned to an orthogonal coordinate system. The IMU was configured with a sampling frequency of 100 Hz, and the second moving average filter stage implemented in the microcontroller was adjusted with a filter width of 15; this means an attenuation of 14.16% at 20 Hz; for further details of this digital filter, which is embedded on the IMU, see [45]. 
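As a complement to the de-noising procedure just described, a minimal sketch based on the PyWavelets package is shown below; the universal threshold used here is a simple stand-in for the SURE rule applied later in the paper, and the number of decomposition levels is illustrative.

```python
import numpy as np
import pywt

def wavelet_denoise(x, wavelet="db4", level=6):
    """DWT-based de-noising sketch: decompose, soft-threshold the detail coefficients,
    reconstruct. The approximation coefficients are left untouched so that the
    low-frequency content of the signal is preserved."""
    coeffs = pywt.wavedec(x, wavelet, level=level)
    sigma = np.median(np.abs(coeffs[-1])) / 0.6745          # robust noise estimate (MAD)
    thr = sigma * np.sqrt(2.0 * np.log(len(x)))              # universal threshold
    coeffs = [coeffs[0]] + [pywt.threshold(c, thr, mode="soft") for c in coeffs[1:]]
    return pywt.waverec(coeffs, wavelet)[: len(x)]
```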
The characteristics provided by the manufacturer can be seen in Table 1. The test for static analysis was conducted in a room temperature at the Navsas laboratory, Politecnico di Torino [46]. Seven hours of static data were collected in order to analyze the inertial sensors data with the methods that were explained previously. The following sections provide details of the analysis achieved for this IMU data. Table 1. 3DM-GX3-25 IMU characteristics. Acc, accelerometer. Autocorrelation Analysis After the seven hour-length data collecting, we used the autocorrelation method to achieve the analysis of the random errors that affect the accelerometers and gyroscopes of the IMU. Nevertheless, before processing the raw samples, we removed the turn-on bias for each sensor. Then, the high-frequency terms were attenuated by applying the wavelet de-noising technique. The idea in this step is to minimize the uncorrelated noise that is present in the sensors. Subsequently, the autocorrelation is calculated (Figure 10(b)), and the corresponding parameters should be extracted from the curve. In the case of the first order GM process, they would be stated as T c and σ, respectively. Figure 10(a) depicts the normalized autocorrelation function of the accelerometers before applying de-nosing, while Figure 10(b) corresponds to the autocorrelation curve after de-noising with six levels of decomposition using Daubechies 4 as the wavelet function. This autocorrelation shows clearly that the residual noise of the x-axis accelerometer after applying wavelet de-noising is still dominated by terms that are uncorrelated. With respect to the other two-axes accelerometers (i.e., y-axis and z-axis), their correlations seems to have more correlated terms than in the x-axis accelerometer case, so a high order autoregressive model could be used to model their residual noise, since the autocorrelation curve is similar to the curve of high order AR processes (see [5,26]). The same wavelet de-noising procedure was repeated to analyze the gyroscope's characteristics. The results are depicted in Figure 11(a). This curve shows that the signal for the three gyroscopes is mainly dominated by short-term noises (high-frequency components), which are related to white noise. After applying wavelet de-noising with six levels of decomposition using Daubechies 4 as the wavelet function ( Figure 11(b)); the autocorrelation shows that the three gyros have similar characteristics, and although part of the uncorrelated noise was removed, the remaining signal for the gyroscopes still has a representative white noise component. In the case of inertial sensors based on MEMS technology, the assumption that the stochastic error follows a first order Gauss-Markov process is not valid in most of the situations. This can be visible by comparing Figure 4 with Figures 10(b) and 11(b), where it can be seen that they are different from the autocorrelation function of the first order Gauss-Markov process. This is because these sensors are composed by more complex noise types, and first order Gauss-Markov is only a rough approximation of this complex structure of noises. Nonetheless, for the sake of comparison with the different models and to validate this analysis, a first order AR process is also assessed in Section 7. Figure 10. 
It is worth mentioning that the uncorrelated noise could be minimized by applying more levels of decomposition during the wavelet de-noising procedure, or a very high order autoregressive model could be used to create the model. However, the use of such a complex AR model in the integration filter would drastically increase the matrices sizes, as well as the computational burden. In addition, due to the fact that the autocorrelation has some other limitations (see Section 4.1), the method that will be analyzed in the following section is more appropriate to model higher order autoregressive processes. AR Models Since the autocorrelation is a low-accurate technique to identify the noises affecting a low-cost INS, a method based on AR models have been used to overcome this issue (see [5]). It consists in combining AR processes and wavelet de-noising to reduce high-frequency noise and, consequently, to obtain the AR coefficients from the residual noise. In other words, after minimizing the short-term error (high-frequency components) with wavelet de-noising, the residual noise could be modeled by an AR model. For static drift data of the inertial sensors, the approximation part of the DWT includes the earth gravity, the earth rotation rate frequency components and the long-term error, while the detail part of the DWT contains the high-frequency noise and other disturbances [5,43]. By working with inertial data collected in a stationary condition, we first applied the wavelet de-noising technique, and then, the AR model coefficients were estimated with Burg's method. This procedure is executed for each sensor and for two AR models: first and third order. In this work, the attention is focused on these two models, because the first order AR models is one of the most used in the navigation field, and also up to the third order, because as it is explained by Nassar et al. [5,34], the higher order would increase the computational load and might result in unstable solutions. Table 2 depicts the parameters obtained with Burg's method for each inertial sensor using the wavelet de-noising characteristics described in the previous section. It shows the coefficients for the first and third order AR process that correspond to the stochastic process explained in Section 4.2. These AR model coefficients are estimated after computing wavelet de-noising in stationary conditions, which was described in Section 4.5. Table 2. Autoregressive process coefficients for each inertial sensor obtained with Burg's method after wavelet de-noising with six LOD. PSD Analysis The power spectral density was implemented using Welch's method, since this has been found to have the widest application in engineering and experimental physics [47]. In this case, we have applied a Fast Fourier Transform with 2 20 data points for the seven hours of the data collection. The results for the PSD are shown in Figure 12(a) for accelerometer data. Figure 12(a) depicts the one-sided PSD for accelerometers data. This log-log plot shows a bunch of high-frequency components, which makes it difficult to identify noise terms and obtain parameters of the stochastic model. The variance in these short-terms noises may be decreased by averaging adjacent frequencies of the estimated PSD [17]; this task can be accomplished by using a technique that is called frequency averaging; further details of this technique can be found in [37]. 
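A minimal sketch of the Welch PSD followed by frequency averaging is given below; the FFT length matches the 2^20 points mentioned above, while the number of averaged bins is an illustrative choice rather than the exact setting used for Figure 12.

```python
import numpy as np
from scipy.signal import welch

def averaged_psd(x, fs=100.0, nfft=2 ** 20, avg_bins=16):
    """One-sided Welch PSD followed by simple block averaging of adjacent
    frequency bins ("frequency averaging")."""
    f, pxx = welch(x, fs=fs, nperseg=min(nfft, len(x)), nfft=nfft)
    m = (len(f) // avg_bins) * avg_bins
    f_avg = f[:m].reshape(-1, avg_bins).mean(axis=1)
    p_avg = pxx[:m].reshape(-1, avg_bins).mean(axis=1)
    return f_avg, p_avg

# The noise slopes (0, -1, -2) are then read from a log-log plot of p_avg versus f_avg.
```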
Figure 12(b) shows a PSD curve after applying frequency averaging; it can be noticed that the noise term identification is easier than in Figure 12(a), and although the low-frequency part of the PSD plot has a high uncertainty, it still conveys some information [37]. According to Figure 5, which was presented in Section 4.3, there are three types of noise: the acceleration random walk (K), the bias instability (B) and the velocity random walk (N). Figure 12(b) shows that the z-axis accelerometer has a bias instability (slope −1) smaller than the other two accelerometers, and the velocity random walk is almost the same for all the accelerometers (slope 0). The values for each noise parameter (B,N,K) were extracted drawing straight lines for each frequency band influenced by the noise. The interception of each line with a specific point was taken into account. For instance, the PSD curve for the z-axis accelerometer is plotted in Figure 13; it also includes straight dotted lines for each noise, N, B and K, with their respective slopes, 0, −1,−2. The acceleration random walk (K) is present in the low-frequency components between 1 × 10 −4 Hz and 2.29 × 10 −3 Hz. This parameter is obtained by fitting a straight line with a slope of −2, starting from 1 × 10 −4 Hz, until it meets the vertical line of f = 1 Hz. Thus, the acceleration random walk for the z-axis accelerometer is determined as: For details of the intercepts to determine the noise parameters, see [11,37]. The bias instability (B) is the dominant noise between 2.29 × 10 −3 Hz and 7.1 × 10 −2 Hz, with a slope of −1, while the velocity random walk (N) is present between 0.1248 Hz and 20 Hz. After 20 Hz, there is an attenuation, because of the digital moving average filter, which is used to minimize high-frequency spectral noise produced by the MEMS sensors. Regarding the gyroscopes, Figure 14(a) represents the power spectral density, while Figure 14(b) corresponds to the gyros PSD after applying frequency averaging; in the latter, was identified as angle random walk (N) and bias instability (B), following the same procedure as with the accelerometers. Table 3 summarizes the values of different errors that affect the inertial sensors using PSD method. In order to check the validity of these noise coefficients obtained with the power spectral density, AV analysis is presented in the following section. AV Analysis For Allan variance analysis, the acceleration and the angular rate were integrated to obtain the instantaneous velocity and angle. Subsequently, the log-log plot of Allan variance standard deviation versus cluster times (T ) was obtained after evaluating Equation (13). The results are plotted in Figure 15(a) for the accelerometer and Figure 16 for gyro data. Figure 15(a) shows the AV estimated on the 3DM-GX3-25 accelerometers. According to Figure 6, which was presented in Section 4.4, the accelerometers are affected by three types of error: velocity random walk (N), bias instability (B) and acceleration random walk (K). It confirms that z-axis accelerometer has a bias instability (slope 0) smaller than the other two accelerometers, and the velocity random walk is almost the same for all the accelerometers (slope −1/2), which is coherent with the results obtained with the PSD. The values for each noise parameter were extracted as in the PSD, drawing straight lines for each error with its corresponding slope, but in this case, the interceptions are different. 
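The slope-based readings described above can be sketched as a fixed-slope fit in log-log coordinates; the helper below is illustrative, and the final conversion of the reading into the coefficients N, B and K follows the intercept rules of [11,37], which are not reproduced here.

```python
import numpy as np

def fixed_slope_reading(f, psd, slope, f_band, f_ref=1.0):
    """Fit a straight line of fixed slope to a log-log PSD segment and read its value
    at f_ref (1 Hz, as in the text). f_band = (f_lo, f_hi) selects the region dominated
    by one noise term: slope -2 for rate/acceleration random walk, -1 for bias
    instability, 0 for velocity/angle random walk."""
    mask = (f >= f_band[0]) & (f <= f_band[1]) & (f > 0)
    intercept = np.mean(np.log10(psd[mask]) - slope * np.log10(f[mask]))
    return 10.0 ** (slope * np.log10(f_ref) + intercept)
```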
To clarify, Figure 15(b) depicts straight lines for each noise of the z-axis accelerometer. In this case, the accelerometer has N, B and K with slopes −1/2, 0 and 1/2, respectively. It can be seen that the dominant noise in short cluster times is the velocity random walk, while the dominant error in long cluster times is the acceleration random walk. From the straight line with slope −1/2 fitted to the beginning of the N noise, a value, σ = 0.047 (m/s/h), at a cluster times of 1 h can be read. Since the velocity random walk (N) is present in a cluster time interval where the number of independent clusters is very large, the estimation accuracy of the AV is approximately 1.1%. Thus, the velocity random walk or, in other words, the noise term (N) for the z-axis accelerometer is determined as: The Allan variance standard deviation versus cluster times (T ) for gyro data is depicted in Figure 16. Unlike accelerometers, the gyroscopes have all similar characteristics, where two types of noises can be recognized: angle random walk (N) for short cluster times and bias instability (B) for long cluster times. For the x-axis gyro (blue curve), the bias instability is present in the time range between 321.92 (s) and 654.01 (s). The value of this error can be measured with a flat line at 29.57 (deg/h). Dividing this standard deviation by the factor 0.664, as suggested in [11], the B coefficient can be achieved: For further details of the intercepts of each noise term in the log-log AV curve, see [8,11,16]. Table 4 summarizes the error coefficients with their respective uncertainty for accelerometers and gyro data. The correlation time, (T c ), of the bias instability, (B), and the standard deviation for each sensor (STD) of the IMU 3DM-GX3-25 are shown in Table 5. The correlation time, (T c ), might be used in Equation (1) for modeling the bias instability (B) as a first order Gauss-Markov process; this value is obtained from the segment of the curve where the bias instability is the dominant noise, i.e., the flat segment of the log-log Allan variance curve. It should be mentioned that not only these parameters, but also the whole parameters obtained from AV need to be manually tuned in the KF, since the values obtained from AV are considered an initial approximation of the bias-drift [48]. This verifies the results that were obtained with PSD analysis, where velocity random walk (N), bias instability (B) and acceleration random walk (K) for accelerometers data and angle random walk (N) and bias instability (B) for gyro data were also identified. It can be seen that most of the estimated values in PSD (see Table 3) are within the confidence interval computed by AV ( Table 4). The next section presents the inertial sensor error model that mixtures of AV and wavelet de-noising techniques. Wavelet De-Noising with Allan Variance In order to combine wavelet de-noising (WD) and Allan variance under dynamic conditions, it is necessary to process the inertial sensors measures with wavelet de-noising before computing the mechanization (see Figure 3), which leads to the following question: how many levels of decomposition should be applied? In this case, the number of levels of decomposition (LOD) for the DWT are chosen based on the the spectrum of the signal after the DWT is applied. We have to consider that each level of decomposition divides the spectrum of the signal, x(n), into different sub-bands, as was explained in Section 4.5. 
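The corresponding readings from the Allan deviation curve can be sketched in the same way; the helper below assumes the cluster-time axis is expressed in the same units (hours) used for the 1 h reading quoted above, and the 0.664 factor for B is the one suggested in [11].

```python
import numpy as np

def read_N_and_B(T, adev, n_band, b_band):
    """Read velocity/angle random walk (N) and bias instability (B) from an Allan
    deviation curve. n_band is the (T_lo, T_hi) interval where the -1/2 slope
    dominates; the fitted line is evaluated at T = 1 in the units of the T axis.
    b_band is the flat region of the curve; B = min(adev) / 0.664."""
    m = (T >= n_band[0]) & (T <= n_band[1])
    c = np.mean(np.log10(adev[m]) + 0.5 * np.log10(T[m]))   # log s = -0.5 log T + c
    N = 10.0 ** c                                            # fitted line at T = 1
    m = (T >= b_band[0]) & (T <= b_band[1])
    B = adev[m].min() / 0.664
    return N, B
```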
This means that if the sampling frequency of the inertial sensor is f s = 100 Hz, after applying one LOD, we will have a spectrum between 0-25 Hz for the approximations coefficients (A 1 ) and a spectrum between 25-50 Hz for the details coefficients (D 1 ), considering perfect filters. Therefore, the frequency band of the wavelet de-nosing output will be limited to f s /(2 × 2 k ) for the more relevant coefficients (A k ), where k is the level of decomposition (LOD). Since the idea is to preserve the frequency components that are associated with the motion dynamics of the land vehicle, we consider that these motion dynamics are low-frequencies components for land-vehicle applications (e.g., between 0 and 5 Hz), as is mentioned in [31]. Therefore, we evaluated the number of LOD from the one that nearly reaches 5 Hz and higher levels, i.e., considering the approximation coefficients (A k ). Thus, the test was achieved using the Matlab Wavelet Toolbox from three LOD, where the band of approximation coefficients is limited to 6.25 Hz (100/(2 × 2 3 ) = 6.25), up to eight levels of decomposition, where the output band is limited to 0.1953 Hz (100/(2 × 2 8 ) = 0.1953). This is taking into account that we use a sampling frequency of f s = 100 Hz for the inertial sensors. These experiments were assessed using the Daubechies family, specifically, "db4", as the wavelet function, with soft thresholding based on Stein's Unbiased Risk Estimate (SURE), since these parameters are typically used in pre-filtering inertial sensors [4,7,31]. After selecting these wavelet de-noising parameters, the data collected in the laboratory was de-noised and, subsequently, processed with the AV algorithm. Figure 17 depicts the Allan variance standard deviation versus cluster times (T ) for the z-axis accelerometer (red curve) after applying wavelet de-noising with three and eight levels of decomposition (blue curves). According to this plot, wavelet de-noising removed the short-term noises, while the long-term noises remain without attenuation, as was expected. It is also noticed that the higher the level of decomposition, the more high-frequency components are removed. If we consider these two cases-the first one applying wavelet de-noising with three LOD and the second one applying eight LOD (Figure 17)-the most relevant components that correspond to the motion dynamics of the vehicle would have to be above 0.16 s and 5.12 s (vertical black dotted lines) for each case, respectively. If these components that relate to the motion dynamics are not above these cluster time values, they would be attenuated by the de-noising filters, which could degrade the INS accuracy. Given that these motions of the vehicle are mixed with the long-term noises, a suitable LOD should be selected with the purpose of not removing relevant components that would compromise the performance of the navigation system. Therefore, to analyze the effect of wavelet de-nosing, we evaluated the enhancement accuracy of the GPS/INS solution with two vehicle tests, where a total of seven GPS outages were introduced under different dynamic conditions with a duration of 30 s and 60 s (see Figure 18), respectively. A similar procedure was achieved in [35] with a tactical-grade (medium-accuracy) and navigation-grade (high-accuracy) IMUs. The performance of the GPS/INS solution (i.e., without error models) during GPS outages with wavelet de-nosing under different LOD is summarized in Table 6. 
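The band limit of the approximation coefficients for each level of decomposition can be computed directly, as in the short sketch below, which reproduces the 6.25 Hz and 0.195 Hz values quoted above for a 100 Hz sampling frequency.

```python
def approximation_band(fs, lod):
    """Upper frequency (Hz) retained by the approximation coefficients after `lod`
    levels of decomposition, assuming ideal half-band filters: fs / (2 * 2**lod)."""
    return fs / (2.0 * 2 ** lod)

# fs = 100 Hz: LOD 3 -> 6.25 Hz ... LOD 8 -> 0.195 Hz, matching the values in the text
for k in range(3, 9):
    print(k, approximation_band(100.0, k))
```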
It depicts the outage number, the average speed and the maximum horizontal error for each GPS outage that was assessed. The LOD 0 corresponds to the navigation solution without applying wavelet de-noising. In the case of three LOD, we apply one level of decomposition less for y-axis and z-axis inertial sensors, since the uncorrelated noise is not so dominant for the other inertial sensors, as can be seen in the autocorrelation analysis described in Section 5.2. Table 6 shows that the navigation solution performs slightly better for most of the GPS blockage when seven LOD are applied, compared to the navigation solution without applying wavelet de-nosing (i.e., zero LOD), with an improvement of almost 4.3% in terms of horizontal positioning error. The wavelet de-nosing parameters that provided the most significant enhance accuracy of the GPS/INS solution are summarized in Table 7. It represents the levels of decomposition where the most relevant energy associated with the motion dynamics of the vehicle remain. In this case, the most significant frequency components of the vehicle motion dynamics for the 3DM-GX3 IMU are below 0.78 Hz for y-axis and z-axis accelerometers, while for the rest of inertial sensors, it is below 0.39 Hz. The use of Stein's Unbiased Risk Estimate (SURE) as a threshold rule helps us not to loose coefficients associated with the vehicle, since it is a conservative threshold that is usually used when small details of the signal lie in the noise range [49]. Having selected the LOD for wavelet de-noising, the long-term noises are modeled and compensated by the AV parameters obtained in Section 5.5. Overall, under dynamic conditions, wavelet de-noising will be computed for inertial sensor measurements prior to the INS mechanization (see Figure 3), and the AV model will be in charge of compensating the long-term noises. The next section explains the way the AV model and each model obtained so far is adapted into the loosely-coupled strategy. INS Bias Model Adapted to the Loosely-Coupled KF Having identified the random errors using AV and PSD, the parameters obtained with AV were used in the loosely-coupled GPS/INS integration scheme (Figure 3) to model the errors of accelerometers and gyros of the IMU under test. The stochastic model parameters for each sensor are taken from Tables 4 and 5. Thus, the 3DM-GX3-25 accelerometers stochastic error a se was modeled as: where the noise term associated to N is modeled as white noise (WN), the noise term associated to K as a random walk (RW), while the bias instability (B) is modeled as a first order Gauss-Markov process (first GM). Regarding the 3DM-GX3-25 gyro stochastic error, g se , the model was defined as: where the noise term associated to N is modeled as white noise (WN) and the bias instability (B) is modeled as a first order Gauss-Markov process (first GM). The latter noise can be modeled by a combination of Markov noise states [37], and there are also different approaches to model the bias instability noise terms; some of them are presented in [16,50]. In this case, a first order Gauss-Markov process was fitted to the flat part of the AV curve taking into account B and its correspondent correlation time (T c ) (see Table 5). Regarding the noise term angle random walk (N), it presents dominant high-frequency components that have a correlation time much shorter than the sample time. Therefore, this noise is modeled as additive noise with noise variance obtained from the parameter, N (see Table 4). 
On the other hand, the AR coefficients obtained from Burg's method are adapted into the KF taking parameters that were shown in Table 2. These stochastic error models were implemented in the KF according to the state-space forms that were presented in Section 3.2. Further details about IMU error state-space implementation in the Kalman filter can be found in Appendix A.1. Results and Discussion As explained in Section 3, we use loosely-coupled integration with feed-back, which corrects the INS error through a close-loop. The INS error dynamics equations are built in the KF, having initially nine states for position, velocity and attitude error plus additional states to estimate the bias of each sensor of the IMU. The EKF was adapted for each designed bias model in order to evaluate the accuracy of the stochastic processes that were obtained from the previous analysis. Firstly, the two models extracted from AV\PSD were implemented, so the vector error states of the Extended Kalman Filter were augmented with six and nine states, respectively. The latter error model was combined with wavelet de-noising in order to evaluate the enhancement accuracy when Allan variance parameters and wavelet de-noising techniques are blended together. Finally, two autoregressive models were assessed augmenting EKF with six and 18 states. Table 8 summarizes the stochastic models for the 3DM-GX3 sensors and the number of states that are required in the loosely-coupled GPS/INS integration. The EKF for the loosely-coupled integration has 15 states for two models: one is the model obtained with AV\PSD, where the bias instability (B) of both accelerometers and gyro are modeled with a first order Gauss-Markov process (GM) plus velocity\angle random walk (N), which is modeled as white noise (WN) for accelerometers and gyros, respectively. The second model with 15 states is a first order AR model. Although it is not depicted in Table 8, the AV model that was mixed with wavelet de-noising corresponds to the case of EKF with 18 states. From here on, the abbreviations 15AR, 27AR, 15AV, 18AV and 18AVWD may be used when referring to the 15 state AR, 27 state AR, 15 state AV, 18 state AV and 18 state AV with wavelet de-noising models, respectively. Table 8. Number of states in the loosely coupled integration architecture for different error models. States 18 States 27 States AR a se \g se 1 st orderAR 3 rd orderAR In order to assess the performance of the inertial sensor error models, a car was equipped with the 3DM-GX3-25 MEMS grade IMU, which was integrated with the Sat-Surf platform with a u-blox LEA-5X receiver [46]. The specifications of the IMU can be found in Table 1. The experimental test setup that was installed inside the car is provided in Figure 19. This platform with the navigation instruments was mounted in the car rear, including the power supply that was delivered by one battery of 12 volts DC. Two data sets were collected in urban roadways inside the city of Turin, Italy. After the data collection campaign, the loosely-coupled integration architecture with the error models presented in this paper were evaluated. Although there were no GPS outages during the campaigns, we intentionally introduced several GPS outages off line, lasting 30 s and 60 s. During an outage, the system works in prediction mode only, and the accuracy of the loosely-coupled's performance relies entirely on the INS error model and, in particular, on the INS bias model. 
Therefore, it is straightforward to consider different outage lengths and different vehicle's dynamic conditions in order to have a clearer answer on the accuracy of the bias models under investigation. It is really worth mentioning that since these results are based on the loosely-coupled strategy, the simulated outages have complete GPS signal blockages. The GPS/INS solution without any outages was used as a reference to compare the performance of the different error models during the simulated GPS signal blockages. The first trajectory that was used to asses the different sensor error models is shown in Google Earth map (Figure 20(a)). This road test is part of the whole trajectory and lasts near 17.3 min; we have acquired the data from the IMU with a sampling frequency of 100 Hz. The GPS signal blockages that were intentionally introduced during post-processing are depicted in Figure 20(b) (shown as blue lines overlaid on the red trajectory), in which there are three outages with a duration of 30, 60 and 30 s for outage 1, 2 and 3, respectively. These artificial GPS outages include straight and turn portions of the trajectory in an urban roadway, which comprise typical conditions of a real GPS signal degradation inside a city. Table 9 summarizes the computation of the maximum and the mean horizontal position error for the error model solutions during the outages of the first trajectory. This table also shows the duration of each outage and the average speed during the three outages that were introduced in this road test. During the first GPS outage (Figure 21(a)), there is a turn out of approximately 90 degrees; this is a challenging segment of the trajectory to evaluate the bias models, since there is an abrupt change in heading angle. From the correspondent horizontal position error (Figure 21(b)), it can be noticed that the 18AVWD model produces the minimum horizontal error, less than 15 m for almost the whole GPS outage. The mean horizontal error for the 18AVWD model is 12.56 m, while the same error parameter for the 27AR model is 19.62 m. Regarding the 15AV and 18AV state models based on Allan variance parameters, it can be seen that they are slightly similar, since the only difference is the acceleration random walk (RW (K); see Table 8 Table 9, this outage lasts 60 s, and the average speed is about 42.31 km/h. This outage, introduced in a straight portion of the trajectory, shows that the 18AVWD model is better than the AR models and the other stochastic error models based on AV parameters. To further validate the performance of the different stochastic error models, a second road test trajectory was collected in some urban roadways in the city of Turin; there is also a part of the path on a highway in the outskirts of the city. The road-test trajectory is 15.05 min long and is depicted in Figure 23(b). Table 10 summarizes the mean and the maximum error for each error model analyzed, as well as the average speed and the duration of each outage that was introduced off-line. Regarding GPS outage 5 ( Figure 24(a)), it includes a turn out with an average speed of 52.25 km/h. According to the correspondent horizontal error (Figure 24(b)), it can be observed that the correction that is achieved by the 18AVWD error model is bigger with respect to the one applied through the other methods, and it has a maximum horizontal and mean position error of 53.61 m and 36.50 m, respectively, during 30 s of absence of the GPS signal. 
As far as GPS outage 7 is concerned, it has been simulated along a straight portion of the path, including a slight curve at the end of the outage (Figure 25(a)). This GPS blockage lasts 60 s, having an average speed of the vehicle of 112.42 km/h. This GPS outage was intended to evaluate the stochastic error models under high speed conditions, and same as in the previous GPS blockages, the 18AVWD performed better than the other models. In order to summarize the maximum and mean position error for both trajectories and each GPS outage performed, Figure 26 shows a comparison between the error model solutions for the seven GPS outages introduced during the test campaigns made in the city of Turin. Overall, the model based on AV and wavelet de-noising is the one that has provided the best accuracy in most of the cases under investigation. For instance, taking into account the results that are depicted in Figure 26(a), the combination of the Allan variance parameters and wavelet de-nosing model (18AVWD) has an improvement in terms of horizontal positioning error of 50.95% over the the first order AR model (15AR) maximum horizontal position error. Furthermore, the 18AVWD provides an improvement of 48.20% over the third order AR model (27AR). Regarding the models obtained from AV, the model 18AVWD has shown an improvement of 31.89% and 26.06% over the 15AV and 18AV, respectively. Considering the mean error in horizontal positioning (Figure 26(b)), the blending of the Allan variance parameters and wavelet de-nosing (18AVWD) allowed an improvement of 39.75% over the the first order AR model (15AR), and it also has a better accuracy of almost 41.94% with respect to the third order AR model (27AR). In the same way, the 18AVWD has provided an improvement of 27.67% and 25.13% over the 15AV and 18AV, respectively. We can also clearly appreciate how 18AV shows better results compared with 15AV in most situations where the GPS signal is not available (see Figure 26), since it offers a more adequate representation of bias-drift, according to the noise terms identified with AV and PSD (i.e., the addition of the noise source associated to the acceleration random walk (K) for each of the accelerometers-18AV ). Moreover, the performance of the AR models are lower than the ones obtained with AV; the explanation of this fact is commented on next. As far as the AR technique is concerned, the main objective of using AR models and wavelet de-noising is to remove the uncorrelated noise of the inertial sensors as much as possible. In fact, if we are able to remove the main quantity of the uncorrelated noise, we can then obtain a smooth autocorrelations curve, and the noise can be modeled with an higher order Gauss-Markov process (e.g., third order AR model), with a consequent benefit on the accuracy and performance of the GPS/INS system. Unfortunately, this is not the case of the low-cost inertial sensors (MEMS IMUs) we have used in this work, since, as is shown in Section 5.2, the autocorrelation function of some of the inertial sensors after processing the data with the de-noising technique does not have a smooth autocorrelation curve, which makes the estimation of the parameters less accurate compared to the parameters obtained with AV (i.e., 15AV and 18AV). Another option to get a more accurate estimation of the bias-drift can be achieved by using higher order AR models (for instance, in reference [33], the authors use an AR model with 120 states). 
In this case, we adopted a tradeoff between complexity and accuracy, and we selected 27 states in the AR modeling. At last, the mixture between AV and wavelet de-noising has shown much better enhancement accuracy of the INS than the others methods presented in this work compensating for the short-term and long-term noises that affect the inertial sensors. Conclusions and Recommendations In this work, different stochastic error models for the measurement noise components of a MEMS-based IMU have been derived from experimental data and compared, specifically, autoregressive/wavelet de-noising models, Allan variance and Allan variance/wavelet de-noising. These stochastic models obtained from several techniques were adapted to the loosely-coupled strategy integration. Additionally, their performance was assessed in a low-cost navigation application by means of intentionally introducing several GPS outages in two trajectories collected in real urban roadways. The artificial GPS blockages were introduced in straight and curved portions of the trajectories comprising conditions of real GPS signal degradation inside a city. Although AR processes combined with wavelet de-noising are commonly used for modeling INS stochastic errors, due to the fact that they have more modeling flexibility than first order Gauss-Markov, random walk and white noise processes, it is necessary to consider that the autocorrelation function of the IMU's raw measurements in static condition is expected to be a smooth curve (after de-noising), to use a low-order AR model, but this desired situation does not always apply for a low-cost inertial sensors (MEMs grade) As was mentioned, the inertial sensors (MEMS grade) are affected not only by short-term noises, but also by long-term noises. Minimizing the latter is not an easy task, since these are combined with vehicle motion dynamics. In this work, we evaluated a error model that is a mixture of AV parameters and wavelet de-noising techniques (18AVWD); this model showed better performance than the other traditional methods based on AV and AR models during different GPS outages; specifically, with the 18AVWD model, we got a maximum horizontal error of 53.61 m with respect to 92.51 m (15AV), 96.32 m (18AV), 181.38 m (15AR) and 82.74 m (27AR). The 18AVWD stochastic error model uses the parameters obtained from AV to compensate for the long-term noises, while wavelet de-noising is employed to minimize the short-term noises that affect the inertial sensor of the IMU. Therefore, the wavelet de-noising technique has once again demonstrated its utility for removing the short-term noises of the inertial sensors. Nevertheless, other adaptive filtering techniques based on wavelet packet could be used in the future to get even better results, as the structure of decomposition of the sensor signal could be adapted according to the vehicle motion dynamics. It is also important to mention that, depending on the application, the selection of the decomposition level has to be carefully analyzed, due to the fact that frequency components that are associated with, e.g., vehicle motion dynamics may be eliminated after performing a de-noising technique. It is well known that the AV technique presents drawbacks, such as: uncertainty of large clusters, so it requires large data sets to generate consistent AV curves [50]. Despite this, Allan variance uses a relatively simple procedure to characterize the random errors, and it has been successfully used in works, such as [16,40]. 
We plotted Allan variance together with wavelet de-nosing in the same log-log curve after applying different levels of decomposition, which has allowed us to analyze the error term attenuation and the vehicle motion dynamics. By exploiting a combined use of the AV and wavelet de-noising, we have shown how to enhance the position accuracy in a GPS/INS integrated system without excessively increasing the complexity of the INS error model. We combined AV/Wavelet de-nosing and evaluated different levels of decomposition showing that although some vehicle motion components might be attenuated; we verified by simulation that the selected LOD provide more benefits concerning position accuracy. Moreover, since we were dealing with a low-cost IMU, we noticed that it required many levels of decomposition to attenuate part of the uncorrelated noise and observe an enhancement in the position accuracy using wavelet de-nosing, which is not the case for high-end IMUs. In the future, the error models analyzed in this paper could be adapted in more complex GPS/INS integration strategies, such as tightly-coupled, in order to enhance the position accuracy by using GPS estimates of pseudoranges and Doppler. To include the bias of the inertial sensors (i.e., Equations (19) and (20)), the transition matrix in the discrete time is augmented from the initial nine states, as in Equation (21): where ∆t is the sampling time of the INS i.e., 0.01 s, while β a and β g correspond to the reciprocal of correlation time, (T c ), presented in Table 5 for accelerometers and gyros, respectively. This transition matrix was described for one inertial sensor in Equation (7); in the case of the three accelerometers, the expression, diag(β a ∆t), is given by: The complete error states after adapting Equation (21) into the first nine state of the KF is presented in Equation (23): δx = δr n δv n δψ n δb a,bi δb g,bi δb a,k T (23) where δb a,bi and δb g,bi are the bias-drift associated to the first order GM process for accelerometers and gyros, respectively, while δb a,k is the bias-drift of the accelerometers that represents the random walk process. The design matrix, G, for the 18 error states AV can be written as: where C n b is the frame rotation matrix from the body to the n-frame [14,27]. The noise covariance matrix Q of the model is: where q a,n , q a,bi and q a,k are the spectral densities of the process driving noises for each noise term that are used to model the bias-drift of the accelerometers (i.e., WN, RW and first order GM process). Similarly, q g,n and q g,bi are the noise variance quantities that will be used within the KF to model the bias-drift of the gyros (i.e., WN and first order GM process). All these quantities are computed based on the parameters that were obtained with the AV technique (see Tables 4 and 5). In order to describe the relationship between the parameters obtained from the experiments (i.e., N, B and K), and the noise process that are modeled (i.e., WN, first order GM process and RW), we take as an example the x-axis accelerometer, so the spectral density in discrete time of the process driving noises of a WN process can be expressed as: q ax,n = σ 2 W Nax /∆t = N 2 ax /(3600 * ∆t) where σ 2 W Nax is the variance of the white noise process and N is the velocity random walk associated to the x-axis accelerometer from Table 4. 
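The mapping from the AV readings of one sensor to the discrete-time process-noise terms of Q can be sketched as follows; the white-noise term follows Equations (26)-(27), the GM term follows the expressions given in the remainder of this appendix, and the form and unit conversion used for the random-walk term are assumptions, since Equation (30) is not written out here.

```python
import numpy as np

def av_to_process_noise(N, B, K, Tc, dt):
    """Discrete-time process-noise terms for one sensor from the AV readings.
    N, B, K are assumed to be in the per-hour units of Tables 3-4, hence the
    /3600 conversions taken from the text."""
    q_wn = N ** 2 / (3600.0 * dt)                            # white noise, Eqs. (26)-(27)
    sigma_gm = B * 0.664 / 3600.0                            # GM std dev from B
    q_bi = sigma_gm ** 2 * (1.0 - np.exp(-2.0 * dt / Tc))    # first order GM driving noise
    q_rw = (K / 3600.0) ** 2 * dt                            # assumed form of Eq. (30)
    return q_wn, q_bi, q_rw
```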
The spectral density, q ax,bi , in discrete time for the first order GM process is given by [29]: q ax,bi = σ 2 GMax 1 − e −2∆t/Tc,ax (28) where T c,ax is the correlation time from Table 5 and σ 2 GMax is the covariance of the first GM process that can be determined by means of the bias instability parameter for the x-axis accelerometer (B ax ) from Table 4. σ GMax = B ax * 0.664/3600 The spectral density, q ax,bi , in discrete time for the random walk process can be expressed as: where σ 2 RWax is the noise covariance of the RW process and K ax is the acceleration random walk for the x-axis accelerometer from Table 4. IMU error state-space for the 27 states with third order AR models: The transition matrix in the discrete time used to augment the KF with a bias-drift modeled as a third order AR process for each inertial sensors can be described by Equation (31): where α a 1 , α a 2 and α a 3 are vectors with the coefficients for the three accelerometers, while α a 1 , α a 2 and α a 3 are the vectors with the coefficients for the three gyros. This transition matrix was described for one inertial sensor in Equation (10); in the case of the three accelerometers, the expression, diag(α a 1 ), is given by: The complete error states of the KF will have 27 states, which are given by: δx = δr n δv n δψ n δb a,b1 δb a,b2 δb a,b3 δb g,b1 δb g,b2 δb g,b3 T where δb a,b1 , δb a,b2 and δb a,b3 are the nine states associated to the third order AR models of the three accelerometer, while δb g,b1 , δb g,b2 and δb g,b3 are the nine states associated to the third order AR models of the three gyros. The design matrix, G, for the 27 error states based on the third AR models can be written as: In this case, the noise covariance matrix, Q, of the model is: where q a,n , q g,n are the same spectral densities of the white noise processes described in the 18 states AV models, while q a,b and q g,b are the the spectral densities of the third order AR processes for each inertial sensor. In the case of the y-axis accelerometer, the spectral density in discrete time is by given by: where β 0 is the standard deviation of the AR process.
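For the 27-state model, the per-sensor companion blocks of Equation (31) can be assembled as in the sketch below, assuming the sign convention of Equation (9); the coefficients would be the Burg estimates reported in Table 2.

```python
import numpy as np
from scipy.linalg import block_diag

def ar3_companion(alpha):
    """3x3 companion-form transition block for one sensor's third order AR bias state,
    for the model x[n] = -(a1*x[n-1] + a2*x[n-2] + a3*x[n-3]) + beta0*w[n]."""
    a1, a2, a3 = alpha
    return np.array([[-a1, -a2, -a3],
                     [1.0, 0.0, 0.0],
                     [0.0, 1.0, 0.0]])

def ar_bias_transition(alphas):
    """18x18 block-diagonal bias transition for the six inertial sensors, i.e. the
    bias part of the 27-state model; `alphas` is a list of six (a1, a2, a3) tuples."""
    return block_diag(*[ar3_companion(a) for a in alphas])
```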
Allele-Specific Polymerase Chain Reaction for Detection of Main gyrA Allelic Variants in Helicobacter pylori Strains Background: Rapid detection of resistant strains of Helicobacter pylori in human clinical samples is of major importance in clinical settings. Inability of conventional clinical laboratory techniques in detection of these strains usually leads to failure of prescribed therapeutic regimens. Objectives: The aim of this study was designing a simple and rapid allele-specific PCR (AS-PCR)-based method for detection of more frequent gyrA mutations at Asn87Lys codon, responsible for emergence of fluoroquinolone resistance in H. pylori strains. Patients and Methods: All bacterial strains were obtained from clinical biopsy samples in our laboratory. Identification of the isolates was performed by the genusand species-specific primers and allele-specific primers, designed to match with the site of the point mutations. Samples with positive results for the designed PCR method were sequenced to verify the existence of the target mutations. Results: Point mutations in the gyrA gene at Asn87Lys codon (AAT > AAA and AAC > AAG) were detected in all standard resistant strains as well as some of clinical isolates with previously determined resistance phenotypes for fluoroquinolones. Presence of the target mutations was successfully confirmed in all the control strains by the newly designed primers and sequencing. Conclusions: The designed AS-PCR was a good and reliable method for detection of AAT > AAA and AAC > AAG point mutations in H. pylori isolates. Background Helicobacter pylori, a Gram-negative bacterium found on the luminal surface of the gastric epithelium, was first isolated by Warren and Marshall in 1983.It induces chronic inflammation of the underlying mucosa, chronic, acute, and atrophic gastritis and gastric, and duodenal ulcer diseases (1,2).Treatment of the infecting strain and its eradication can lead to treatment of the diseases.Unsuccessful treatment of the bacterium and carriage of H. pylori is strongly associated with the risk of atrophic gastritis development, which is a precursor lesion to gastric cancer (3,4).It is estimated that H. pylori-positive patients have a 10-20% lifetime risk of developing ulcer disease and a 1-2% risk of developing distal gastric cancer (5).Eradication of H. pylori is still a challenge, because of the rapidly increasing prevalence of multidrug resistant strains worldwide (6).In consequence of the increasing resistance of H. pylori against common antibiotics, next generation drugs, such as quinolones, will be required for eradication therapy in the future.Resistance of H. pylori to fluoroquinolones, happening through the effect of DNA gyrase A subunit on the basis of point mutations in the quinolone resistance-determining region of the gyrA gene, has recently become common (7,8).In fact, H. pylori strains with reduced susceptibility to fluoroquinolones have a mutation at codon 87, Asn of the gyrA gene (9).New researches by Wang et AAA and AAC > AAG) in all fluoroquinolone-resistant isolates, respectively.These results showed that frequency of these mutations was high in H. pylori isolates of different populations (10)(11)(12). 
Usage of unprescribed drugs, inappropriate administration of quinolones for common infections by other bacteria, and spontaneous mutations happening during the bacterial infection cycle in the human body, can lead to emergence of newly developed resistant strains, which could be challenging (13).Rapid detection of these resistant strains in clinical samples is important for controlling their dissemination in the community.Due to the difficulty of the culture-based method for common antimicrobial susceptibility testing approach, development of rapid and simple molecular techniques can overcome this need.Because of the heterogeneity of gyrA gene among H. pylori strains in different geographic areas, we developed a rapid test based on an allele-specific PCR (AS-PCR) to detect more frequent mutations of Asn-87Lys (AAT > AAA and AAC > AAG) in the gyrA gene among different H. pylori isolates in Iran.The gyrA mutations of H. pylori causing reduced susceptibility to fluoroquinolones could be detected by this method.The AS-PCR method for detection of the gyrA mutations in H. pylori can be useful for easy identification of the targeted fluoroquinoloneresistant strains of H. pylori in the clinical samples (3). Objectives The aim of this study was designing a simple and rapid AS-PCR-based method for detection of more frequent gyrA mutations at the Asn87Lys codon, responsible for emergence of fluoroquinolone resistance in H. pylori strains. Bacterial Strains All of the used bacterial strains in this study were obtained from clinical biopsy samples in our laboratory at Gastroenterology and Liver Diseases Research Center, Tehran, Iran.The isolates were identified by culture and genus and species-specific PCR primers for 16s rRNA and glmM; the primer sequences are listed in Table 1 (14,15).Susceptibility of the isolates to ciprofloxacin, as a member of fluoroquinolones, was determined based on the agar dilution method, as described before (16).PCR conditions were as follows: Total volume of PCR mixture was 25 mL; 0.12 pM of each primer, 2 mM/L MgCl 2 ; 0.16 mM/L dNTP, 1.5 U Taq DNA polymerase, and 200 ng DNA sample.PCR was performed in a thermocycler (AG 22331; Eppendorf, Hamburg, Germany) under the following conditions: initial denaturation for 4 minutes at 94˚C followed by 30 cycles of 94˚C for 1 minute, 57˚C for 45 seconds, and 72˚C for 1 minute.After a final extension at 72˚C for 10 minutes, the PCR products were examined by electrophoresis on 1.2% agarose gels according to the standard procedures.Bacterial isolates were selected based on their previously reported resistance phenotype to ciprofloxacin and targeted mutations in gyrA gene. Primer Design and Bioinformatic Investigations Allele-specific primers were designed, in which the first nucleotide from the 3′ end match the site of the targeted point mutation. Furthermore, the second and third nucleotides from the 3′ end were designed to produce base pair mismatches, to attain high specificity in the AS-PCR for the mutant alleles.Specific binding capacity of the primers was checked by Iranian and other available H. pylori sequences in the GenBank database (JQ 323555-587) (16). Site-Specific PCR Experiments All H. 
All H. pylori strains used in this study were confirmed by genus-specific (16s rRNA) and species-specific (glmM) primers. DNA extraction of the isolates was performed with the NucleoSpin tissue DNA extraction kit (MACHEREY-NAGEL GmbH & Co. KG). The control strains harboring the Asn87Lys mutations (AAT > AAA and AAC > AAG) were obtained from domestic resistant strains previously sequenced in our laboratory. PCR conditions for the AAT > AAA primer included denaturation at 95°C for 4 minutes, followed by 25 cycles of denaturation at 94°C for 1 minute, annealing at 51°C for 45 seconds, and extension at 72°C for 25 seconds, with a final extension at 72°C for 10 minutes; for the AAC > AAG primer, denaturation at 95°C for 4 minutes was followed by 25 cycles of denaturation at 94°C for 1 minute, annealing at 53°C for 45 seconds, and extension at 72°C for 25 seconds, with a final extension at 72°C for 10 minutes. PCR was performed in a thermocycler (AG 22331; Eppendorf, Hamburg, Germany). The PCR mixture (final volume of 25 μL) contained Super Taq (Gen Fanavaran, Iran), 0.12 pM of each primer, 0.16 mmol/L dNTP and 1.5 mmol/L MgCl2. The primer sequences are listed in Table 1. Different clinical isolates with defined resistance phenotypes against fluoroquinolones were also analyzed for screening of the target mutations. The amplicons were analyzed by electrophoresis in 1% agarose gel (Fermentas, #R0491, Lithuania) in Tris-boric acid-EDTA buffer and stained with ethidium bromide. The gels were then photographed under ultraviolet light (UVIdoc, UVItec Limited, Cambridge, UK).

(Figure 2 legend: The products were amplified with allele-specific primers. Lane 1, positive control (OC3) for the AAC > AAG mutation; lanes 2 and 3, positive controls (OC192, OC273) for the AAT > AAA mutation; lanes 4 and 5, negative controls for the AAC > AAG and AAT > AAA mutations, respectively; lanes 6-9, randomly selected clinical isolates.)

Sequencing Experiments. To confirm the PCR results for each mutation, the amplicons were purified and sequenced using the amplification primers and an Applied Biosystems (ABI) terminator cycle sequencing ready reaction kit (BigDye® Terminator v3.1 Cycle Sequencing Kit) on an ABI 3130xl genetic analyzer. The sequences obtained were edited manually and aligned using Gene Runner software (version 3.05). Analyses of the sequences were performed with Lasergene software (version 6.0), comparing them with the reference sequences. The sequences were then submitted to GenBank.
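For quick reference, the two allele-specific cycling programs described above differ only in their annealing temperatures. As a compact restatement of the conditions quoted in this section (not an official protocol file), they could be written as a small configuration structure:

```python
# Cycling programs for the two allele-specific primers, transcribed from the
# conditions described above. Each step is (temperature in Celsius, seconds).
AS_PCR_PROGRAMS = {
    "AAT>AAA": {
        "initial_denaturation": (95, 240),
        "cycles": 25,
        "denaturation": (94, 60),
        "annealing": (51, 45),
        "extension": (72, 25),
        "final_extension": (72, 600),
    },
    "AAC>AAG": {
        "initial_denaturation": (95, 240),
        "cycles": 25,
        "denaturation": (94, 60),
        "annealing": (53, 45),
        "extension": (72, 25),
        "final_extension": (72, 600),
    },
}
```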
Results. All strains used in this study were from clinical samples. The isolates were identified as H. pylori by specific PCR for 16s rRNA and glmM (Figure 1). The presence of the targeted mutations had been confirmed in all the positive control strains (GenBank accession numbers: OC192, rcgldsh22; OC273, rcgldsh30; OC3, rcgldsh13). Bioinformatic analyses of the designed primers showed their efficacy with respect to criteria such as the absence of bulge loops and internal loops and an optimal 3′ ∆G range. As shown in Figure 1, the primers clearly differentiated the wild-type strains from the gyrA mutants. Unspecific bands were present in some strains. To eliminate these bands, the concentration of MgCl2, the extension time and the number of cycles were changed in different PCR reactions. The best results were obtained at a 1 mM MgCl2 concentration. Point mutations in the gyrA gene at the Asn87Lys codon (AAT > AAA and AAC > AAG) were detected in all standard resistant strains and in some clinical isolates with previously determined resistance phenotypes for fluoroquinolones. The similarity of the annealing temperatures and PCR conditions for these two primers suggests that they could be used in the same PCR reaction.

Discussion. Diverse assays have been developed to investigate genetic polymorphisms in human pathogens, including PCR-restriction fragment length polymorphism analysis, AS-PCR, multiplex PCR, single-strand conformation polymorphism analysis, the oligonucleotide ligation assay, and real-time PCR (9). AS-PCR is a simple, cost-effective and excellent genotyping method in this regard. For mutations in the gyrA gene of H. pylori, AS-PCR does not require restriction enzyme cleavage, purification of PCR products, or a real-time PCR machine, which can increase its application for routine tests in clinical laboratories. However, this method has some limitations, such as the need to design primers at the boundary of the targeted mutations, which can affect the PCR results. In this study, PCR amplification was performed with allele-specific primers in which the first nucleotide from the 3′ end was designed to match the site of the point mutation; furthermore, the second and third nucleotides from the 3′ end were designed to produce two mismatches to attain high specificity in the AS-PCR for the mutated alleles. The results showed high quality, reproducibility and specificity across the tested isolates. A number of the resistant isolates did not give positive PCR results for these mutations, suggesting the existence of other mutations in gyrA or of other resistance mechanisms among the studied samples. This limitation suggests the design of new primers as well as the examination of other mutations. In a study by Nishizawa et al., the heterogeneity of H. pylori strains in different geographic areas was presented as the main reason for the weakness of AS-PCR in the case of the noted mutations in the gyrA gene (9,17). The developed technique is useful for determining bacterial susceptibility to fluoroquinolones within several hours. Diverse mutation levels of H. pylori strains in different geographic regions may cause assay failure in detecting other genetic gyrA variants found in different countries (18). Designing allele-specific primers for other common mutations responsible for the development of resistance phenotypes in H. pylori strains against this group of antibiotics will help us to improve their eradication by rapid detection in the laboratories.
In conclusion, given the high frequency of gyrA mutations at the Asn87Lys codon among fluoroquinolone-resistant strains of H. pylori, we developed a reliable PCR-based technique to detect these mutations without bacterial culture and minimum inhibitory concentration determination. The results of this study successfully confirmed the specificity of the designed primers in detection of the targeted mutations.

Figure 1. Electrophoretic patterns of glmM and 16s rRNA in different Helicobacter pylori isolates.

Figure 2. Electrophoretic patterns of amplicons from H. pylori strains with different gyrA mutations.
2-Phenyl-1H-1,3,7,8-tetraazacyclopenta[l]phenanthrene

There are two molecules in the asymmetric unit of the title compound, C19H12N4. One is almost planar [dihedral angle between the fused-ring system and the phenyl ring = 2.16 (13)°] and one is somewhat twisted [dihedral angle = 13.30 (14)°]. In the crystal, the molecules are linked by N—H⋯N hydrogen bonds to form chains.

The author thanks Beihua University for supporting this work.

S2. Experimental. 1,10-Phenanthroline-5,6-dione (1.5 mmol) and benzaldehyde (1.5 mmol) were dissolved in a CH3COOH/CH3COONH4 (1:1) solution (30 ml). The mixture was refluxed for 3 h under argon. After cooling, the mixture was diluted with water and neutralized with concentrated aqueous ammonia, immediately giving a yellow precipitate, which was washed with water, acetone and diethyl ether, respectively. Crystals of the title compound were obtained by recrystallization from dichloromethane.

S3. Refinement. C- and N-bound H atoms were positioned geometrically (N-H = 0.86 Å and C-H = 0.93 Å) and refined as riding, with Uiso(H) = 1.2 Ueq(carrier). The structure of (I) is shown with the atomic numbering scheme; displacement ellipsoids are drawn at the 30% probability level.

Data collection: Bruker APEX CCD area-detector diffractometer; radiation source: fine-focus sealed tube; graphite monochromator; φ and ω scans; absorption correction: multi-scan (SAINT; Bruker, 1998), Tmin = 0.981, Tmax = 0.982; 23942 measured reflections; 5721 independent reflections; 2627 reflections with I > 2σ(I).

Special details. Geometry: all e.s.d.'s (except the e.s.d. in the dihedral angle between two l.s. planes) are estimated using the full covariance matrix. The cell e.s.d.'s are taken into account individually in the estimation of e.s.d.'s in distances, angles and torsion angles; correlations between e.s.d.'s in cell parameters are only used when they are defined by crystal symmetry. An approximate (isotropic) treatment of cell e.s.d.'s is used for estimating e.s.d.'s involving l.s. planes.

Refinement: refinement of F2 against ALL reflections. The weighted R-factor wR and goodness of fit S are based on F2; conventional R-factors R are based on F, with F set to zero for negative F2. The threshold expression F2 > σ(F2) is used only for calculating R-factors(gt) etc. and is not relevant to the choice of reflections for refinement. R-factors based on F2 are statistically about twice as large as those based on F, and R-factors based on ALL data will be even larger.

Fractional atomic coordinates and isotropic or equivalent isotropic displacement parameters (Å2): x, y, z, Uiso*/Ueq.
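For readers unfamiliar with these agreement factors, the standard crystallographic definitions (general conventions, not equations taken from this paper) are

$$R_1 = \frac{\sum \bigl\lvert\, |F_o| - |F_c| \,\bigr\rvert}{\sum |F_o|}, \qquad wR_2 = \left( \frac{\sum w \left(F_o^2 - F_c^2\right)^2}{\sum w \left(F_o^2\right)^2} \right)^{1/2}, \qquad S = \left( \frac{\sum w \left(F_o^2 - F_c^2\right)^2}{n - p} \right)^{1/2},$$

where $F_o$ and $F_c$ are the observed and calculated structure factors, $w$ are the reflection weights, $n$ is the number of reflections and $p$ is the number of refined parameters.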
Calibration of Troitsk nu-mass detector readout electronics by signal digital filters

We present the results of tuning and calibration of the detector electronics in the signal digitization mode. The goal of the experiment is to search for a possible sterile-neutrino signature in tritium beta-decay. The read-out electronics work in direct oscilloscope mode, which requires optimizing the time frame with the goal of minimizing noise and improving energy resolution. We use a 7-pixel silicon drift detector (SDD) and a CMOS charge-sensitive preamplifier with a very small integration capacitance. The amplifier forms a slowly rising output shape and operates in pulse-reset mode. A 125 MHz ADC digitizes the signals. Using calibration data from Fe-55 and Am-241 gamma sources, we check triangular and trapezoidal digital filters to obtain the best noise and energy resolution performance. We also examine the option of differentiating the output signal.

1. Introduction. Nowadays many physics experiments rely on high-tech achievements and solutions in detector and electronics performance. In the "Troitsk nu-mass" setup we use a 7-pixel silicon drift detector (SDD) [1] with direct digitization electronics. The goal of "Troitsk nu-mass" is to search for a new hypothetical particle, the sterile neutrino [2]. If this particle exists, it could close a long-standing problem of the Standard Model of particle physics: where are the right-handed neutrinos, and could sterile neutrinos be such particles? In our experiment, we make precise measurements of the electron energy spectrum from tritium beta-decay in an attempt to find a change in the spectrum shape. The "Troitsk nu-mass" setup consists of a gaseous tritium source and an electrostatic spectrometer [3] with a silicon drift detector for electron registration. In our setup the signals from the detector are continuously digitized, and the task is to perform triggering, fast signal sampling and filtering. The details of the detector readout are the subject of this work. We describe several digital filter applications that allow us to decrease the noise and the trigger threshold and to improve off-line amplitude resolution.

2. Detector and electronics. As mentioned, we use a silicon drift detector (SDD) with 7 pixels of 2 mm each for detection of electrons with energies up to 20 keV. The major advantage of such a detector is a very low anode capacitance, less than 100 fF. The detector was manufactured at the Semiconductor Laboratory of the Max Planck Society (HLL) [1]. Each anode is bonded to a charge-sensitive amplifier, CUBE [4], which operates in pulse-reset mode with a capacitor value of 20 fF in the signal feedback loop. CUBE is a CMOS preamplifier for radiation detectors. The read-out system, an 8-channel CUBE Bias Board (XGL-CBB-8CH), was manufactured by XGLab - Bruker Nano Analytics [4]. Similar detectors were already tested at the "Troitsk nu-mass" spectrometer in 2017-2019 [5] and demonstrated excellent energy resolution, less than 200 eV full width at half maximum (FWHM). Continuous signal digitization is provided by a 16-channel TQDC module in a VME crate, designed at the Joint Institute for Nuclear Research (JINR, Dubna) for the NICA project [6]. The TQDC module can operate both in continuous digitization mode and with FPGA sub-processing. Each TQDC channel has a 12-bit ADC with a sampling rate of 125 MHz. Here, as mentioned, we discuss the applicability of the continuous mode and the usage of an external reset mode for the front-end electronics.
The amplifier output signal rises continuously and linearly from -0.5 V to +0.5 V because of the unavoidable detector leakage current and is reset by an external signal with a time period of 100-200 microseconds. In the case of a particle signal, its charge is integrated and produces a step-like change in the linearly rising output signal, Fig. 1.

Figure 1: Linearly rising output signal (ADC counts, a.u. versus time, ns) with a step-like signature from the registered particle; the input and filtered traces are shown. The dotted line is the result of a moving average over 16 samples.

3. Signal sampling and filtering. 3.1 Trigger threshold. During actual physics measurements of the tritium beta-spectrum, one of the critical parameters is the applied electronics threshold. The lower this value, the more precise the data. To optimize the performance of the read-out system, we should use a digital moving-average threshold. For this purpose, we can apply, for example, the simplest linear transformation, a moving average, $a_i = \frac{1}{m}\sum_{j=0}^{m-1} x_{i-j}$, where $a_i$ is the moving average, $x_k$ are the ADC samples and $m$ is the number of averaged bins. We collect a set of calibration data with Fe-55 and Am-241 isotopes in direct digitization mode with an ADC time step of 8 ns. The dotted line in Fig. 1 illustrates how the moving average works. For each trigger, the time window is set so that the real signal step is in the middle of the frame. Then, we apply the ADC averaging over some number of time bins, m. One can see that the noise fluctuations are significantly suppressed; some signal oscillations around the rising edge have almost completely disappeared. The signal amplitude restoration is significantly improved. The above example could be used to set the threshold if it were a "normal" signal over a constant bias value. To fine-tune our digital threshold over a continuously rising signal we have to apply a slightly different transformation, namely the well-known triangular digital filter, $a_i = \hat{H} \cdot x$, where $\hat{H} = (-m, m)$ denotes the filter coefficients; for the triangular filter these are "-1" and "+1", each repeated m times. This filter sums over a bin interval of size m, then subtracts the sum over the previous m bins, and normalizes the difference by m. As long as the filter slides over a continuously rising signal with a constant slope, its output is a constant. Approaching the step region, it finds the step and gives a triangular-shaped digital signal with rising and falling sides equal to m. Automatically, the noise is suppressed to some level. In the case of a trapezoidal filter, $\hat{H} = (-m, l, m)$, where l denotes a number of zeros between the subtracting intervals and defines the width of the trapezoid top. Fig. 2 demonstrates the result of applying the triangular and trapezoidal filters with m = 16 bins and l = 16 bins to real data. Note that for a continuously rising signal the digital "zero energy" or noise line changes, depending on the parameters m and l. To tune the digital threshold to its optimal value, we scanned the noise amplitude with a triangular filter for different m. In each case, we fit the noise width with a Gaussian function. Fig. 3 shows the dependence of the noise level in keV on the triangular filter parameter m. The noise drops sharply for m > 8 bins and then almost saturates for m > 16. For m = 16 we get a 3σ noise level of 0.27 keV, which gives an estimate of a possible digital threshold. We also checked the noise dependence for the trapezoidal filter for different l at m = 16.
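As an illustration of the filters just described, the following is a minimal NumPy sketch (our own illustrative code, not the collaboration's actual processing software). The parameters m and l correspond to the summation interval and flat-top width used in the text, and the toy waveform mimics a linearly rising output with a charge step.

```python
import numpy as np

def moving_average(x, m=16):
    """Moving average over m samples (the dotted curve in Fig. 1)."""
    return np.convolve(x, np.ones(m) / m, mode="same")

def triangular_filter(x, m=16):
    """Triangular filter: (sum of the newer m samples - sum of the older m) / m.

    On a linear ramp the output is constant; a charge step produces a
    triangular pulse whose rising and falling sides are m bins long."""
    kernel = np.concatenate([np.ones(m), -np.ones(m)]) / m  # +1/m newer half, -1/m older half
    return np.convolve(x, kernel, mode="same")

def trapezoidal_filter(x, m=16, l=16):
    """Trapezoidal filter: as above, with l zero coefficients forming a flat top."""
    kernel = np.concatenate([np.ones(m), np.zeros(l), -np.ones(m)]) / m
    return np.convolve(x, kernel, mode="same")

# Toy waveform: a slow linear ramp (leakage current) with a 5 a.u. charge step
# at sample 500, plus Gaussian noise.
t = np.arange(1000)
waveform = 0.01 * t + 5.0 * (t >= 500) + np.random.normal(0.0, 0.3, t.size)
print(triangular_filter(waveform).max())  # roughly the step amplitude plus the ramp offset
```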
We do not get any improvement in the noise level by increasing the parameter l from 0 to 24 bins. Thus, we conclude that the triangular filter works well for noise suppression. We also use a configuration in which the continuously rising signal is differentiated by a decoupling 1 nF capacitor. In this case, for trigger selection we can use a standard moving-average filter with a normalized sum over some number of bins. Fig. 4 demonstrates the dependence of the noise level on the sample width. The best value is worse by about 20% relative to the triangular filter in Fig. 3.

Optimization of energy resolution. For each trigger, we need to optimize the length of the time window (sample width) with the goal of keeping it short at a reasonable energy resolution. We collect calibration data with an Fe-55 gamma source. The source has an intense gamma line at 5.89 keV and a smaller one at 6.49 keV. The time frame was set to be wide, about two microseconds, with the step-like signal in the middle. We apply triangular and trapezoidal filters to restore the signal amplitude. Fig. 5 shows the energy resolution after applying the triangular filter. Again, one can see that the energy resolution saturates at m above 16. The energy spectrum of the Fe-55 source at m = 16 is shown in Fig. 6. It is appropriate to mention that the detector energy resolution σ = 0.11 keV is worse than the best value for an SDD detector of FWHM = 150 eV (or σ of about 0.062 keV) published in [7]. However, that value was obtained at an SDD temperature of -20 °C, and we also know from our experience [5] that this detector has a larger leakage current compared to similar detectors we used before. We also took data with the Am-241 gamma source. Fig. 7 shows the spectrum reconstructed with a triangular filter at m = 16. One can see prompt 13-18 keV lines and a low-intensity line from the metastable decay at 59.54 keV. Applying the trapezoidal filter at m = 16 with different l gives, within errors, no improvement in energy resolution compared to the triangular (l = 0) filter result. Thus, we conclude that the use of a triangular filter with m = 16 bins will satisfy our needs for threshold selection and energy resolution. The minimal time window for the ADC sampling should be 16 + 16 · 2 = 48 bins.

Energy resolution for a differentiated signal. The elementary particle releases some portion of charge in the silicon detector, which forms the output signal. In the current detector, charge integration requires about 50 ns, or a maximum of 80 ns. This time defines the signal rise time from the bias board. In some applications, it is more convenient to differentiate the linearly rising output from the bias board to get a "normal", not linearly rising, signal. Because the charge collection time fluctuates between events, any passive filter, such as the differentiation used in our case, distorts the amplitude and a "ballistic deficit" occurs [8], which is the loss of output signal amplitude due to the interplay between the finite charge-collection times in a detector and the characteristic time constants of the electronics. In Fig. 8 we show the digitized signal from Fe-55 with a dotted line. The shape was averaged over many events. To get the best energy resolution and to minimize the ballistic deficit we tried a few methods. By taking just the maximum in the frame, we get an energy resolution of σ = 187.1 eV. Another approach is to integrate in a certain neighbourhood of the waveform maximum. Summing the ADC values from 8 bins to the left of the maximum to 30 bins to the right gives σ = 132 eV.
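A minimal sketch of the amplitude estimators discussed here and just below, assuming the differentiated waveform is available as a NumPy array (illustrative code, not the actual analysis software):

```python
import numpy as np

def window_sum_amplitude(waveform, left=8, right=30):
    """Sum of ADC samples in a window around the waveform maximum.

    left/right are the numbers of bins taken on either side of the maximum,
    e.g. (8, 30) as quoted above or the optimized (3, 19) combination."""
    peak = int(np.argmax(waveform))
    lo, hi = max(peak - left, 0), min(peak + right + 1, len(waveform))
    return float(waveform[lo:hi].sum())

def self_weighted_amplitude(waveform, left=3, right=19):
    """Self-weighted estimate over the same window: sum(x_i^2) / sum(x_i)."""
    peak = int(np.argmax(waveform))
    lo, hi = max(peak - left, 0), min(peak + right + 1, len(waveform))
    w = waveform[lo:hi]
    return float(np.sum(w * w) / np.sum(w))
```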
Varying the ranges to the left and to the right of the maximum, we find the best resolution of 116.6 eV for 3 samples on the left and 19 samples on the right (the (-3, 19) combination). Another method is to introduce weights for the ADC values. We checked a simple one, where the weight is proportional to the value itself, that is, we calculate $\sum_i x_i \cdot x_i / \sum_i x_i$. Taking the whole rise-time range, 8 bins to the left and 30 bins to the right of the maximum, we get a resolution of 126 eV. Varying the range, the best resolution is again obtained for the (-3, 19) combination, σ = 116.4 eV, which is the same as for a simple sum in this range and about 10% worse than for a triangular filter on the linearly rising signal, Fig. 5. In the case of the differentiated signal, the minimal time window for the ADC sampling should also be around 50 bins.

Conclusions. We investigated the application of digital filters to "Troitsk nu-mass" signal processing. A silicon drift detector with 7 pixels of 2 mm each is used for electron and gamma detection in the energy range under 20 keV. The goal was to optimize the trigger parameters and the time window for digitization at a clock frequency of 125 MHz in direct ADC oscilloscope mode. Calibration data with Fe-55 and Am-241 gamma isotopes were analyzed with triangular and trapezoidal digital filters. The triangular filter with integration over 16 ADC samples gives the optimal noise and energy resolution. The best energy resolution for the 5.9 keV gamma line is 110 eV (sigma). Application of a trapezoidal filter with a flat top of up to 24 bins (about 200 ns) does not improve this result. It was concluded that the triangular filter with an integration range of 16 bins will satisfy our needs for trigger selection and energy resolution. We also checked the detector performance with a differentiated output signal. The simple moving-average filter minimizes the noise level at integration over 12-16 samples. A few methods were tested to get the best energy resolution. We found that a simple integration over a rather narrow interval around the ADC maximum gives an energy resolution of about 116 eV for the differentiated signal. The minimal time window for the ADC sampling should be around 50 bins.
Catalytic and Functional Roles of Conserved Amino Acids in the SET Domain of the S. cerevisiae Lysine Methyltransferase Set1 In S. cerevisiae, the lysine methyltransferase Set1 is a member of the multiprotein complex COMPASS. Set1 catalyzes mono-, di- and trimethylation of the fourth residue, lysine 4, of histone H3 using methyl groups from S-adenosylmethionine, and requires a subset of COMPASS proteins for this activity. The methylation activity of COMPASS regulates gene expression and chromosome segregation in vivo. To improve understanding of the catalytic mechanism of Set1, single amino acid substitutions were made within the SET domain. These Set1 mutants were evaluated in vivo by determining the levels of K4-methylated H3, assaying the strength of gene silencing at the rDNA and using a genetic assessment of kinetochore function as a proxy for defects in Dam1 methylation. The findings indicate that no single conserved active site base is required for H3K4 methylation by Set1. Instead, our data suggest that a number of aromatic residues in the SET domain contribute to the formation of an active site that facilitates substrate binding and dictates product specificity. Further, the results suggest that the attributes of Set1 required for trimethylation of histone H3 are those required for Pol II gene silencing at the rDNA and kinetochore function. Introduction Eukaryotic DNA is assembled into higher-order chromatin structures that promote compaction and protection of DNA. The structure of chromatin is dynamic to provide access to the underlying DNA template for nuclear processes, such as transcription and replication, and is controlled by several mechanisms [1]. Although the mechanisms of chromatin regulation by methylated histones are not as well understood as those governed by acetylated histones, a large body of work supports roles for methylated histones in the regulation of euchromatin and heterochromatin [2,3,4,5]. Lysine-methylated histones can have diverse effects on transcription, ranging from regulation of Pol II initiation and elongation to the formation and maintenance of repressive heterochromatin [5,6]. Histone methylation can be more complex than other covalent modifications because multiple methyl groups can be present at the same lysine residue that may alter the function of chromatin in different ways [2,7]. Moreover, regulatory proteins can discriminate the different methylated forms of a histone, providing means to increase the types of signals presented by chromatin [8]. Most lysine methyltransferases (KMTases) contain a SET domain of ,130 amino acids that is responsible for the catalysis of methyl group transfer from S-adenosylmethionine (AdoMet) to specific lysine residues within histone tails and other substrates [9,10]. The SET domain has four conserved sequence motifs ( Fig. 1) that support the methyl transfer reaction [11]. SET motif I contains a GxG motif that along with amino acid residues in SET motifs III (RFINHxCxPN) and IV (ELxFDY) interacts with the methyl donor AdoMet [12]. SET motif II has the amino acid sequence YxG with a tyrosine residue (Y) that has been hypothesized to act as an active site base in the SET domain methyl-transfer mechanism [13,14]. In addition, SET motifs III and IV interact with each other forming a pseudoknot that contains the active site adjacent to the AdoMet and the target lysine binding sites [13,14,15]. 
SET domain KMTases vary with respect to product specificity, defined as the ability to transfer one, two, or three methyl groups to the target lysine [16]. These enzymes also vary in their ability to act independently or as a member of a multiprotein complex. Set7/9 is a human monomethyltransferase that acts independently to transfer one methyl group to lysine 4 of histone 3 (H3K4) [13,17,18]. Likewise, Dim5, a trimethyltransferase from N. crassa, acts independently to methylate H3K9 [14,19]. Conversely, the KMTases Set1 from S. cerevisiae and MLL1 from H. sapiens catalyze mono-, di-and trimethylation of H3K4, and each functions as a member of a multiprotein complex [20,21,22,23,24]. Human SET domain KMTases Set7/9 and MLL1 have been the focus of structural studies. Crystal structures of Set7/9 have identified residues that contact the substrates AdoMet and the target lysine [13,17,18]. This structural information has been analyzed using molecular dynamics, hybrid quantum mechanics, molecular mechanics, and free-energy simulations to gain insights into the mechanism of Set7/9 [25,26,27]. Mixed lineage leukemia protein-1 (MLL1) is named such because chromosomal translocations involving the MLL1 gene are associated with acute lymphoblastic and myelogenous leukemias [28,29]. In vivo, the MLL1 multiprotein complex acts as a histone H3 trimethyltransferase [29,30,31] that regulates transcription [32,33]. Recombinant expression and purification of MLL1 has allowed for analysis of its KMTase activity independently and as part of a minimal core complex. When assayed independently, MLL1 is a slow monomethyltransferase, but in the presence of a core complex of proteins, including Ash2L and RbBp5, MLL1 is a higher-order KMTase, indicating that other MLL1 complex members influence product specificity [20,21,23,34,35]. There has been some debate regarding the mechanism SETdomain proteins use to catalyze methyl transfers to lysine side chains [36]. In order for the S N 2 methyl transfer reaction to occur, the target e-amino group of lysine must be deprotonated. Two mechanisms for lysine deprotonation have been proposed, one involving deprotonation by an active site base and the second requiring deprotonation via an active site water channel. Early studies on Set7/9 concluded that a conserved tyrosine residue in the SET domain behaves as an active site base to facilitate deprotonation of the target e-amino group of lysine. However, two different tyrosines were identified as the potential active site base, one from the YxG SET motif II (Y = Tyr245) [17] and the other from the ELxFDY SET motif IV (Y = Tyr335) [17,26]. In support of the proposed active site base mechanism, a structural study with Dim-5 showed that Tyr178 (equivalent to Tyr245 in Set7/9) interacts with the substrate lysine in a manner that would facilitate its deprotonation while a deprotonated Tyr283 (equivalent to Tyr335 in Set7/9) could stabilize the positively charged AdoMet [14]. However, as detailed below, recent work supports deprotonation via an active site water channel. The purpose of a water channel in the SET domain active site is two-fold: (1) to promote hydrogen bonding within the active site to position important residues and substrates in the methyl transfer reaction and (2) to function as a proton acceptor for deprotonation of lysine [25,37,38]. 
In early modeling studies, when Tyr245 of Set7/9 was substituted with phenylalanine, the e-amino group of the lysine substrate became exposed to a water channel [25], a result that is in agreement with other work showing that the Set7/ 9 Y245F mutant could catalyze higher-order methylation [13]. These modeling studies suggested that Tyr245 forms hydrogen- bonding interactions with a water channel and established the presence of water molecules within the active site of Set7/9 [13,25]. Modeling studies with Set7/9 also indicated that the hydroxyl group of Tyr335, the conserved tyrosine residue in SET motif IV, has a higher calculated pK a than the e-amino group of the lysine substrate containing zero, one, or two methyl groups, making it unlikely that Tyr335 would behave as an active site base [25,38]. Recently, a crystallographic study with Set7/9 mutants using peptides bearing zero, one, two, and three methyl groups on the e-amino group of the target lysine has provided insight into the role of water molecules in modulating multiple methylation events [37]. This study concluded that a water channel within the active site accepts the dissociated proton from the lysine substrate. Therefore, the active-site residues that form the access channel for the target lysine, including the conserved tyrosine residues in SET motifs II and IV, facilitate substrate binding and methyl transfer by creating a distinct volume that discriminates the methylation state of the substrate and thus governs product specificity. The S. cerevisiae SET1 gene encodes Set1, a member of the COMPASS complex that catalyzes methylation of lysine residues in histone H3 and the kinetochore-associated protein Dam1 [22,39,40,41]. Set1 and K4-methylated H3 are required for silencing of Pol II transcription in the ribosomal DNA locus (rDNA) and at telomeres in S. cerevisiae [42,43,44,45,46]. Studies to identify catalytically important residues in the active site of Set1 have been few in number, most likely due to the inability to prepare active Set1 protein for in vitro structural and mechanistic studies. To overcome this limitation, we performed a mutational analysis of the SET domain of Set1 to gain insight into the mechanism of methyl transfer. Single amino acid substitution mutants of Set1 were characterized using in vivo assays, including histone H3 methylation and transcriptional silencing at the rDNA. In addition, a genetic suppression assay was used to indirectly assess methylation of the kinetochore protein Dam1 to gain insight into the possible role of conserved amino acid residues in methylation of a non-histone substrate [41]. Analysis of Set1 activity in S. cerevisiae cells is possible because there is only one H3K4 methyltransferase in yeast [39], compared to at least ten in mammalian cells [2]. Moreover, Set1 is a member of the COMPASS complex that is capable of catalyzing three methylation states at K4 of histone H3, and thus the analysis of mutants provides a way to determine how amino acid substitutions affect product specificity. The results provide insight into the mechanism of methyl transfer by Set1 and information on the role of higherorder H3K4 methylation in silent chromatin and kinetochore function. Media Yeast media used in these experiments have been described elsewhere [47]. YPADT is YPD media supplemented with 40 mg/ L of adenine sulfate and 20 mg/L of L-tryptophan. Plasmids Plasmid MBB251 contains an XhoI-SacII fragment with the SET1 ORF flanked by 436 bp upstream and 347 bp downstream. 
MBB251 was modified by the addition of a Cla1 site 9 bp downstream of the SET1 stop codon to make MBB484. The XhoI-SacII fragment of MBB484 was ligated into pRS406 to create MBB491, a vector for integration of SET1 sequences into ura3-52. To create a mutagenesis shuttle vector, the pBluescript plasmid (Stratagene) was modified by cloning a linker containing a MunI restriction site into the unique EcoRI site to create plasmid MBB479. Next, the MunI-ClaI fragment of SET1 from MBB484 was ligated into MBB479 creating MBB487. Mutagenesis SET1 mutants were generated by site-directed mutagenesis of MBB487 with Phusion polymerase (New England Biolabs) and specific primers (Table S2). Mutations were verified by DNA sequence analysis. The MunI-ClaI fragment from each mutated plasmid was ligated into the MunI-ClaI interval of MBB491. Yeast Strains All S. cerevisiae strains are listed in Table S1. Yeast strains were generated by standard genetic crosses and transformation techniques. Wild-type and mutant alleles of SET1 in the vector MBB491 and the empty vector itself were digested with StuI and transformed into MBY2269 and MBY2450 (ZK2 Dset1, a gift from Sharon Dent). Integration of a single copy of MBB491 with wildtype SET1 or derivatives containing mutant alleles of SET1 into the ura3-52 locus was verified by PCR and Southern blot analysis. The wild type and mutant SET1 alleles integrated at ura3-52 in MBY2269 derivatives were amplified from genomic DNA and each mutation was verified by DNA sequencing. MBY1198 with the SET1 gene at its endogenous location, MBY2269 and derivatives containing wild-type or mutant alleles of SET1 integrated at ura3-52 were used in Western analyses and Northern analyses. Yeast strain MBY2450 (ZK2 Dset1) and its derivatives containing wild-type or mutant alleles of SET1 were used in the ipl1-2 growth experiments. To measure steady-state levels of Set1 protein, 150 mg of whole cell extract were resolved on 7% SDS-PAGE gels and transferred to PVDF membranes. Blots were incubated with anti2Set1 (sc-25945, Santa Cruz Biotechnology; 1:1000), washed and then incubated with HRP-conjugated anti-goat secondary antibodies (sc-2020, Santa Cruz Biotechnology; 1:1000). After washing, blots were developed and imaged as described above, and then stained with Ponceau S to visualize total protein and serve as a loading control [49]. Northern Blot Analysis Isolation of total RNA and Northern blotting were performed as described previously [42,50]. Strand-specific 32 P -labeled RNA probes or DNA probes were used to detect Ty1his3AI and PYK1 mRNAs [51]. Northern blots were imaged with a Pharos FX Plus Molecular Imager and quantified using Quantity One software (Bio-Rad). Plate Growth Assay Using ipl1-2 Strains Strains containing ipl1-2 were grown in 5 mL of YPADT to saturation at 25uC. Ten-fold serial dilutions were made in sterile water and equal volumes (4 ml) of each dilution were spotted onto three YPADT agarose plates. Plates were incubated at 25uC, 30uC or 37uC for 1-2 days prior to imaging. Identification of Conserved Residues within the SET Domain of Set1 The Set1 sequence was aligned with other KMTases to identify the four conserved SET domain motifs and the locations of conserved amino acids that may play an important role in protein methylation. Based on the sequence alignment and structural data from Set7/9, DIM-5 and MLL1 [11,13,14,20], point mutations were made at one or more residues in each of the conserved motifs in Set1 (Fig. 1). 
Expression of Set1 Mutants Each SET1 mutant allele was integrated into the ura3-52 locus of a S. cerevisiae strain lacking the endogenous SET1 gene (Table S1). A strain with the SET1 gene at its endogenous locus as well as strains either lacking SET1 or containing a wild-type copy of SET1 integrated at the ura3-52 locus were analyzed as controls. To verify that cells with the wild type or a mutant SET1 gene integrated at ura3-52 express stable Set1 protein, Western blotting assays were performed using whole cell extracts and antibodies specific for Set1 (Fig. 2). The results show that the steady-state level of Set1 protein was similar in protein extracts from wild-type cells (SET1 + and SET1 + ::ura3-52) and each of the sixteen amino acid substitution mutants. In contrast, background signal was detected in extracts from set1D cells that lack Set1 protein. Steady-state Levels of K4-methylated Histone H3 in the Set1 Mutants Western blotting experiments using whole cell protein extracts were conducted to assess the ability of the Set1 mutants to methylate K4 of histone H3 (Fig. 3, Table 1). Specific antisera were used to detect the steady-state level of K4-monomethylated histone H3 (H3K4me1), K4-dimethylated histone H3 (H3K4me2) For details of classification, see text; partial func/null, partial function with phenotypes more similar to set1D cells; partial func/silent, partial function with phenotypes more similar to Set1 + cells. b Amino acid substitution in Set1. c Average levels +/2 range of H3K4me1, H3K4me2 and H3K4me3 measured in whole cell extracts from Set1 mutants by quantitative western blotting (n = 2); values are normalized to the levels measured in whole cell extracts from a wild type Set1 + strain. No value for +/2 range is given if the two measurements were identical. See Figure 3 and text for details. d Pol II gene silencing at the rDNA was assessed by measuring the level of Ty1his3AI mRNA/PYK1 mRNA in total RNA relative to that from a Set1 + strain where the ratio of Ty1his3AI/PYK1 was set to 1. The values shown are average +/2 SD for n repeats. e Suppression of ipl1-2 growth defects at the restrictive temperature (30uC) was measured in growth assays described in the text and shown in Figure 5. Yes, growth at 30uC; No, severely reduced growth at 30uC; partial-yes, slightly reduced growth at 30uC; partial-no, intermediate reduction of growth at 30uC. doi:10.1371/journal.pone.0057974.t001 or K4-trimethylated histone H3 (H3K4me3). Control extracts from set1D cells lacked detectable K4-methylated H3, verifying that Set1 is the only K4-specific histone H3 methyltransferase in S. cerevisiae [39]. A range of levels of the three forms of K4-methylated H3 was detected in whole cell extracts from the Set1 mutants. Classification of some mutants was clear based on the steady-state levels of K4-methylated H3. For example, certain Set1 SET domain mutants behaved like set1 null mutants with extremely low or undetectable levels of the three forms of K4-methylated H3 (Y967A, N1016A, H1017L, H1017R, Y1054A and F1056A). Other Set1 mutants had steady-state levels of K4-methylated H3 that were variable with one or more forms being higher than the levels measured in extracts from wild type Set1 + cells (R1013H, H1017A, Y1052F, Y1052A, Y1052V and F1056Y). Three of the mutants (G951A, Y967F and Y993A) had levels of H3K4me1 at ,50-70% of wild type but the levels of H3K4me2 and H3K4me3 were 6% of wild type or lower. 
In the remaining mutant Y1054F, the levels of the three forms of K4-methylated H3 were less than 50% of wild type. To learn more about the function of these Set1 mutants in gene silencing at the rDNA and kinetochore function, additional assays of Set1 function were performed. Gene Silencing Activity of Set1 Mutants Previous work has shown that Pol II-dependent expression of a Ty1his3AI element in the rDNA is repressed by rDNA-specific silent chromatin that requires Set1 [39,42,50]. The steady-state level of mRNA from the Ty1his3AI element in the rDNA was measured in total RNA from cells lacking Set1 (set1D), expressing wild-type Set1 or expressing one of the Set1 mutants, to evaluate the ability of the Set1 mutants to support rDNA silencing of Pol II transcription. PYK1 mRNA was measured to normalize the amount of RNA loaded in each lane. Representative results are shown in Figure 4. The mean value of the ratio of Ty1his3AI mRNA to PYK1 mRNA with standard deviation for each Set1 mutant normalized to the ratio for cells expressing wild-type Set1 are given in Table 1. In set1D cells, the steady-state level of Ty1his3AI transcript was increased 5.4-fold (Fig. 4, Table 1) compared to wild type Set1 + cells, consistent with the loss of rDNA silencing of Pol II transcription [39,42]. For most of the Set1 mutants, the ability of a mutant Set1 protein to maintain Pol II gene silencing at the rDNA correlated directly with its ability to methylate histone H3 in vivo (Figs. 3 and 4; Table 1). Several mutants (Y967A, N1016A, H1017L, H1017R, Y1054A, and F1056A) with low or undetectable levels of all three forms of K4-methylated histone H3 exhibited defects in Pol II gene silencing, as expected. Most Set1 mutants with levels of one or more forms of K4methylated H3 greater than wild type (H1017A, Y1052F, Y1052A, Y1052V and F1056Y) retained the ability to silence the Ty1his3AI gene in the rDNA. In these mutants, the level of Ty1his3AI mRNA was similar to the level observed in wild type Set1 + cells (Table 1). However, the silencing phenotype of the R1013H mutant separated it from others in this group. Despite having a level of H3K4me1 that was higher than that in wild-type cells, the R1013H mutant was defective for gene silencing at the rDNA with a steady-state level of Ty1his3AI mRNA that was 2.8-fold higher than the level in wild type Set1 + cells. Notably, the R1013H mutant had a considerably reduced level of H3K4me3 (2% of wild type) compared to the H1017A, Y1052F, Y1052A, Y1052V and F1056Y mutants. The results from the four remaining Set1 mutants suggest a correlation between silencing and H3K4me3. Specifically, the Set1 mutants G951A and Y993A were defective for Pol II gene silencing at the rDNA with average ratios of Ty1his3AI/PYK1 mRNA that were increased approximately eight-fold compared to wild type Set1 + cells ( Fig. 4; Table 1). In contrast, the Y967F mutant had an average ratio of Ty1his3AI/PYK1 mRNA that was two-fold higher than the wild-type strain but the difference was not statistically significant. Given the similarities in the levels of H3K4me1 and H3K4me2 in these three mutants ( Fig. 3 and Table 1), one possibility is that the low level of H3K4me3 detected in the Y967F mutant (4% of wild type) is sufficient to support rDNA silencing, while the lower levels of H3K4me3 in G951A and Y993A mutants (undetectable) are not. The Set1 mutant Y1054F also exhibited defects in gene silencing at the rDNA, though less severe than other mutants (1.7 fold increase in Ty1his3AI/PYK1). 
It is interesting to note that, like the R1013H mutant with a less severe rDNA-silencing defect (2.8-fold compared to WT), a low level of H3K4me3 was also detected in extracts from the Y1054F mutant ( Fig. 3; Table 1). The levels of H3K4me3 in these mutants (Y967F, 4%, R1013H, 2% and Y1054F, 3%) suggest that a relatively low steady-state level of H3K4me3 is sufficient to support Pol II gene silencing at the rDNA. Suppression of the ipl1-2 Mutation by the Set1 Mutants as a Proxy for Methylation of a Non-histone Substrate, Dam1 Dam1, a member of the Dam1 complex that connects microtubules to the kinetochore and promotes proper chromosome segregation [52,53,54], is a non-histone substrate of Set1 [41]. A balance between methylation of Dam1 by Set1 and phosphorylation of Dam1 by the Aurora kinase Ipl1 is required for proper chromosome segregation and cell viability [41,55,56]. Deletion of SET1 suppresses the temperature-sensitive growth defect of cells carrying a conditional allele of the Aurora kinase, ipl1-2. Previous work has shown that this suppression is due to reduced methylation of Dam1 in the absence of Set1 [41]. We tested the ability of the Set1 mutants to suppress the growth defects of ipl1-2 cells at a restrictive temperature as an indirect test of their ability to methylate Dam1. Individual alleles encoding the Set1 amino acid substitution mutants were introduced into cells carrying an ipl1-2 mutation, and the ability of the Set1 mutants to suppress the ipl1-2 growth defect at a restrictive temperature (30uC) was tested (Fig. 5). Cells with a wild-type copy of the SET1 gene (ipl1-2 SET1 + ) and those lacking SET1 (ipl1-2 set1D) grew equally well at the permissive temperature (25uC). However, the ipl1-2 SET1 + cells grew poorly at the restrictive temperature, 30uC. In contrast, the ipl1-2 set1D cells grew well, presumably because methylation of Dam1, and inappropriate chromosome attachment, which reduces cell viability, does not occur in the set1D cells at the restrictive temperature. Figure 2. Cells containing wild-type or mutant alleles of SET1 at ura3-52 express similar steady-state levels of Set1 protein. Whole cell extracts (150 mg) from wild type Set1 + cells (with the SET1 gene at its endogenous location, SET1+, or integrated at the ura3-52 locus, SET1+::ura3-52), Set1 deletion cells (set1D) and Set1 mutants (indicated above each panel) were separated and transferred to PVDF membranes. For each blot, the upper panel is an immunoblot showing the steady-state level of Set1 protein in whole cell extracts; the lower panel is the same membrane stained with Ponceau S to verify equal loading of protein extracts. The dark band below the Set1 band in the immunoblots is a non-specific band. Representative data are shown (n$3). doi:10.1371/journal.pone.0057974.g002 The results of growth assays measuring suppression of the ipl1-2 conditional allele by the Set1 mutants are shown in Figure 5 and summarized in Table 1. Several mutants, Y967A, N1016A, H1017L, H1017R, Y1054A, and F1056A, with reduced or undetectable levels of K4-methylated histone H3, suppressed the growth defect of ipl1-2 cells at 30uC, hinting that these Set1 mutants are likely to have lost the ability to methylate Dam1. 
In contrast, with the exception of the R1013H mutant, the Set1 mutants with levels of one or more forms of K4-methylated H3 greater than wild type (H1017A, Y1052A, Y1052F, Y1052V and F1056Y) did not suppress the ipl1-2 growth defect at 30uC, suggesting that these Set1 mutants are likely to have retained the ability to methylate Dam1. The remaining Set1 mutants, G951A, Y967F, Y993A, R1013H and Y1054F, displayed partial suppression phenotypes, as evidenced by growth at the restrictive temperature that was intermediate between that of the SET1 + and set1D cultures. Based on the results shown in Figures 2, 3, 4, and 5, the Set1 mutants were classified based on their ability to methylate histone H3 and perform other functions in vivo (Table 1). One class of mutants was categorized as 'null mutants' due to the absence or low levels of K4-methylated H3 in whole cell extracts, the inability to silence expression of the Ty1his3AI element in the rDNA and the ability to suppress ipl1-2 growth defects. The Set1 null mutants are Y967A, N1016A, H1017L, H1017R, Y1054A and F1056A. Because Set1 protein was expressed in these mutants (Fig. 2), we Figure 3. Quantitative Western blots measuring steady-state levels of K4-methylated histone H3 in cells expressing wild-type and mutant alleles of SET1. Representative Western blotting experiments are shown for cells carrying wild-type SET1 at its endogenous location, set1D and set1 amino acid substitution alleles. Dilutions of protein extracts (mg loaded indicated below the lower panel) from wild-type (WT), set1D, and Set1 mutant cells were analyzed by Western blotting with specific antibodies to measure the in vivo steady-state levels of K4-monomethylated H3 (amono), K4-dimethylated H3 (a-di), and K4-trimethylated H3 (a-tri). The level of histone H3 (a-H3) or Pgk1 (a-Pgk1) protein was used to normalize the amount of protein loaded in each lane. The average level of normalized K4-methylated H3 detected in Set1 mutant extracts relative to wild-type extracts is shown below each blot (n = 2). See Table 1 conclude that each of the amino acid substitutions abolish or significantly reduce methyl transfer activity in vivo without affecting the steady-state levels of the Set1 protein. A second class contains five Set1 mutants with phenotypes that are similar to wild type Set1 + cells. The Set1 mutants in the 'silent' class are H1017A, Y1052F, Y1052A, Y1052V and F1056Y. These Set1 mutants maintained the ability to silence the Ty1his3AI element at the rDNA and failed to suppress the growth defects in cells with an ipl1-2 mutation. Thus, these mutants are classified as silent with respect to rDNA silencing and suppression of the ipl1-2 phenotype. A high degree of variability in the steady-state levels of the three forms of K4-methylated H3 was detected in this class of mutants, a result that suggests that small (#3-fold) increases or decreases in methyl transfer activity do not interfere significantly with the function of Set1 in rDNA silencing and at the kinetochore. A third class of mutants, the partial function mutants, has five members (G951A, Y967F, Y993A, R1013H and Y1054F). These mutants share the characteristic that each partially suppresses the growth defects of cells carrying the ipl1-2 mutation at the restrictive temperature. Notably, these mutants exhibited broad differences in their steady-state levels of K4-methylated H3 and in their ability to silence Pol II transcription at the rDNA. 
Based on the severity of the rDNA-silencing phenotypes and ipl1-2 suppression, the partial function Set1 mutants were divided into two subclasses. The partial function/null mutants, G951A and Y993A, exhibited a strong loss of rDNA silencing (Fig. 4, Table 1) and an ipl1-2 suppression phenotype (Fig. 5) that was more similar to set1D cells than Set1 + cells. On the other hand, the partial function/silent mutants had phenotypes that were more similar to Set1 + cells (Table 1, Figs. 4 and 5). Implications of these data with respect to catalysis, rDNA silencing and suppression of the ipl1-2 mutation are discussed below. Discussion The covalent modification of proteins and the generation of the histone code play a central role in chromatin structure and function. In this study, we have analyzed several Set1 mutants to evaluate histone H3 methylation patterns and in vivo phenotypes associated with alteration or loss of Set1 function. Our results provide insights into the roles of specific residues of Set1 in catalysis of the three levels of H3K4 methylation. In addition, these mutants provide information to better understand the relationship between the methylation activity of Set1 and gene silencing at the rDNA locus of S. cerevisiae. Although structural data for Set1 do not exist at this time, sequence alignments comparing Set1 to other SET-domain proteins identified conserved residues with the potential to play important roles in methylation reactions. For this study, mutants were generated with amino acid substitutions in one of the four conserved motifs of the SET-domain in SET1. For each mutant, the steady-state level of the mutant Set1 protein was found to be comparable to the level of Set1 protein in cells expressing wildtype SET1 (Fig. 2). These results suggest that the amino acid substitutions do not affect the integrity of the Set1 protein. Therefore, the loss of methylation activity observed in these mutants is unlikely to be due to the loss of Set1 protein from the cell and therefore is likely to reflect changes in the methylation behavior of the Set1 mutant due to the substitution of a specific amino acid residue. Insights into the Catalytic Roles of Conserved Amino Acids in the SET Domain of Set1 Characterization of the methylation activity in five series of Set1 mutants, Y967F/A, H1017A/L/R, Y1052F/A/V, Y1054F/A, and F1056Y/A has provided information that distinguish between the two possible mechanisms of proton abstraction from the eamino group of the substrate lysine residue. As previously discussed, one mechanism proposes that deprotonation occurs due to an active site base and the second that deprotonation occurs via a water channel. Set1 residues Y967 located in SET motif II and Y1054 located in SET motif IV align with conserved tyrosine residues in other SET-domain proteins, including Y178 in Dim-5 (Y967 in Set1) [14], and Y245 and Y335 in Set7/9 (Y967 and Y1054 in Set1, respectively) [17,26] (Fig. 1). In the Dim-5 and Set7/9 structures, each of these tyrosine residues is in proximity to the e-amino group of the target lysine such that each could potentially function as a general base to abstract a proton [14,17,26]. Our results indicate that the Y967A and Y1054A Set1 mutants lack methylation . Northern blot analysis to evaluate gene silencing at the rDNA. Representative Northern blots are shown for cells carrying wildtype SET1, set1D and set1 amino acid substitution mutants. The source of total RNA is indicated above the top panel for each set of blots. 
Ty1his3AI, a Pol II-transcribed gene inserted in the rDNA locus, was used to assess gene silencing at the rDNA. PYK1 mRNA was used to normalize the amount of RNA loaded in each lane. See Table 1 for the normalized average +/2 standard deviation for each mutant. Each mutant was analyzed in 3 or more independent experiments. Total Ty1 mRNA was measured in each sample to verify that mutation of Set1 did not increase the level of total Ty1 mRNA in the cells (data not shown). doi:10.1371/journal.pone.0057974.g004 activity with H3K4 substrates. However, data from the Set1 mutants Y967F and Y1054F show that despite the loss of a hydroxyl group at this position, these mutants catalyze methylation reactions, albeit at reduced levels compared to wild type. The Set1 mutant Y967F has levels of H3K4me1 that are ,50% of wild type and greatly reduced levels of H3K4me2 (1%) and H3K4me3 (4%) (Fig. 3; Table 1). The Set1 mutant Y1054F catalyzes methylation reactions producing 45% H3K4me1, 28% Figure 5. Suppression of ipl1-2 by Set1 mutants. The ability of the Set1 amino acid substitution mutants to suppress the temperature sensitive growth phenotype of ipl1-2 mutants at 30uC was tested using plate growth assays. Dilutions (1:10) of cultures of ipl1-2 SET1 + , ipl1-2 set1D and ipl1-2 Set1 amino acid substitution mutants were spotted on YPADT plates and incubated at 25uC, 30uC and 37uC for 24-48 hours. The ability of the Set1 mutants to suppress the ipl1-2 growth defect at 30uC is summarized in Table 1 H3K4me2, and 3% H3K4me3 compared to wild-type Set1. The presence of K4-methylated H3 in cells with the Y967F and Y1054F Set1 mutants indicates that neither of these tyrosine residues is an essential active site base. Our data suggest that Y967 and Y1054 in Set1 facilitate higher order H3K4 methylation reactions. This idea, supported by our data, is in agreement with structural data on Set7/9. In Set7/9, residues Y245 and Y335 (Y967 and Y1054 in Set1, respectively) contribute to the formation of the lysine access channel that forms the active site [13], contribute to a hydrogen bond network with other active-site residues [13,25,26], interact with water molecules that play a role in orienting the e-amino group of the substrate lysine, and determine if multiple methyl transfer reactions can be catalyzed [27,37,57]. It has been proposed that mutating these conserved tyrosine residues in Set7/9 would cause the displacement of critical water molecules, thereby altering the active site in a way that would change product specificity [25]. Our results from the analysis of Set1 mutants at Y967 and Y1054 are in agreement with a model that these residues contribute to the formation of a hydrogen-bonding network that stabilizes the active site and promotes deprotonation to allow higher order methylation reactions. Whether these hydrogen bonds are with water molecules, other Set1 active site residues, or with the substrate lysine itself remains unknown. There are several conserved aromatic residues in SET domain proteins ( Fig. 1) that contribute to protein methylation. Two such residues in Set1 are Y1052 and F1056. To assess the role of Y1052, a series of mutants containing single amino acid substitutions were made, Y1052F, Y1052A, and Y1052V. This position in SET-domain proteins is referred to as the ''Phe/Tyr switch'' and has been shown to play a role in dictating product specificity [58]. 
According to the "Phe/Tyr switch" hypothesis, a tyrosine residue at this position prevents higher-order methylation, explaining why tyrosine is usually found at this position in SET-domain monomethyltransferases, whereas a phenylalanine residue at this position, which is the case in several higher-order methyltransferases, allows for multiple methyl additions. However, Set1, a higher-order methyltransferase, has a tyrosine residue at this location, Y1052, suggesting that this residue does not dictate product specificity in all SET-domain proteins. Nonetheless, our results are in agreement with previous work showing that a Set1 Y1052F mutant produces greater trimethyltransferase activity [58]. Like S. cerevisiae Set1, the human MLL1 complex catalyzes H3K4 mono-, di- and trimethylation despite having a tyrosine at the "Phe/Tyr switch" position. A mechanistic study indicated that the purified MLL1 SET domain is a monomethyltransferase in the absence of other MLL1 complex members [21]. Studies using the MLL1 SET domain mutant Y3942F revealed that this amino acid substitution allowed the mutant protein to catalyze K4 mono-, di- and trimethylation of an H3 peptide [21]. Our Set1 mutants at the corresponding position, Y1052F/A/V, all had higher steady-state levels of H3K4me3 and reduced levels of H3K4me1 compared to wild-type Set1. These results are in agreement with a model in which the residue at the "Phe/Tyr switch" position contributes to product specificity.

The other partially conserved aromatic residue, F1056 in Set1, aligns with Y337 of Set7/9, a residue shown in crystal structures to form the active-site channel for the target lysine substrate [13,17]. The Set1 mutants F1056Y and F1056A were constructed to investigate the importance of an aromatic residue at this position in the active site. Set1 F1056Y had nearly wild-type levels of the three forms of K4-methylated H3, maintained the SET1 rDNA silencing phenotype and failed to suppress the conditional growth defect associated with the ipl1-2 allele (Table 1). In contrast, Set1 F1056A is a null mutant, lacking the ability to catalyze methylation of H3K4 and able to suppress the ipl1-2 growth defect (Table 1). Our results support a model in which the aromatic residues Y1052 and F1056 play an essential role in formation of the active-site channel, allowing binding of the methyl acceptor, H3K4, and allowing multiple methylation events to occur at the target lysine ε-amino group.

A highly conserved histidine residue, H1017, is found within SET motif III of Set1 (Fig. 1). The corresponding residue H297 of Set7/9 was implicated in positioning a conserved tyrosine residue (Y335 in Set7/9, Y1054 in Set1) required for the formation of the lysine access channel as well as participating in a hydrogen-bond network [13,17,18]. A recent study with an H3907A MLL1 mutant, which corresponds to the H1017 position of Set1, has shown that this substitution causes an AdoMet-binding deficiency that was partially rescued when the Ash2L/RbBP5 heterodimer was associated with the H3907A MLL1 mutant [34]. In our study, three different Set1 mutants were made at H1017, each containing a single amino acid substitution: H1017A/L/R. The Set1 mutant H1017A has a hyper-methylation phenotype, catalyzing the formation of H3K4me1, H3K4me2 and H3K4me3 at levels that are at least 50% higher than those observed in wild-type Set1+ cells (Table 1).
The two other mutants, H1017R and H1017L, are null mutants despite the fact that the substituted residues maintain a similar van der Waals volume (Leu and Arg compared to His) and a positive charge (Arg compared to His). While no definitive conclusion about the role of H1017 can be made, our results with the H1017A mutant show that the size and charge of this residue can be altered without decreasing catalysis. With respect to providing insight into the catalytic mechanism of Set1, our results indicate that no single active-site base is required for methylation of H3K4. Instead, the data suggest that the conserved aromatic residues are critical for forming an active site that facilitates productive substrate binding and dictates product specificity. Cumulatively, these results are consistent with Set1 having a water channel in the active site that allows for deprotonation of the target lysine side chain and accommodation of increasing numbers of methyl groups covalently attached to the lysine ε-amino group.

Insights into the Functional Roles of Set1 in vivo

The Set1 mutants were divided into three classes based on the effect of the mutation on Set1's ability to catalyze methylation of H3K4, to silence expression of a Pol II gene at the rDNA and to interact genetically with ipl1-2, a proxy for methylation of Dam1, a non-histone substrate (Table 1). The null class of mutants lacked the ability to methylate histone H3 on K4. Based on these results, we conclude that these amino acid substitutions do not support the methylation activity of Set1. This conclusion is supported by our results showing that each null mutant suppressed the ipl1-2 conditional growth phenotype, a phenotype associated with a defect in methylation of Dam1 by Set1 [41]. A second class of Set1 mutants had levels of K4-methylated H3 in whole-cell extracts that were 50% or higher than the levels measured in extracts from wild-type Set1+ cells. None of these mutants suppressed the growth defect of ipl1-2 cells, suggesting that Set1 activity was not limiting for proper kinetochore function in these mutants. The partial function class of Set1 mutants retained some methylation activity with histone H3.

These Set1 mutants provide insight into the role of methylation in rDNA silencing. The Set1 partial function mutants suggest that H3K4me3 is required for silencing of Pol II transcription at the rDNA (Fig. 4, Table 1). The Set1 mutants G951A and Y993A, which have slightly reduced levels of H3K4me1 and low or no H3K4me2 and H3K4me3, are defective for rDNA silencing, suggesting that either H3K4me2 or H3K4me3 is required for rDNA silencing. The Set1 mutants R1013H and Y1054F have wild-type or slightly reduced levels of H3K4me1 and H3K4me2 and greatly reduced levels of H3K4me3. These mutants are also defective for rDNA silencing, although the phenotypes are not as severe as in the null mutants that fail to produce H3K4me3. Taken together, the silencing phenotypes of these four mutants suggest that the level of H3K4me3 is correlated directly with Pol II silencing in the rDNA. Interestingly, the results from the Y967F mutant indicate that a level of H3K4me3 that is ~4% of that found in wild-type cells represses Pol II transcription at the rDNA in a manner that is not significantly different from that observed in wild-type Set1+ cells (Y967F, Ty1his3AI mRNA/PYK1 mRNA, 2.0 ± 1.0, n = 6; Table 1).
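The three-way classification described above can be expressed as a simple decision rule over measured methylation levels. The sketch below is illustrative only: the thresholds (1% detection limit, 50% cut-off) and the function interface are assumptions chosen for the example, not values taken from Table 1.

```python
def classify_set1_mutant(me1, me2, me3, detect=1.0, high=50.0):
    """Classify a Set1 mutant from H3K4me1/me2/me3 levels (% of wild type).

    Mirrors the classes used in the text: 'null' mutants show no detectable
    methylation, a second class retains high levels of all three marks, and
    everything in between is 'partial function'.
    """
    if me1 < detect and me2 < detect and me3 < detect:
        return "null"
    if me1 >= high and me2 >= high and me3 >= high:
        return "high methylation"
    return "partial function"

# Illustrative call using the percentages quoted for Y967F above
print(classify_set1_mutant(me1=50, me2=1, me3=4))  # -> "partial function"
```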
A requirement for H3K4me3 in rDNA silencing is consistent with a conclusion from previous work using Set1 mutants lacking portions of the N-terminus of the protein, which resulted in H3K4me3-deficient cells that were defective in rDNA silencing [43]. However, that conclusion was based on the function of a truncated Set1 protein that could have altered COMPASS complex assembly [59]. Our data from the partial function Set1 mutants indicate that the level of H3K4me3 in a cell is related to the silencing capacity of rDNA chromatin. Whether K4-trimethylated H3 is required directly at the rDNA locus or regulates silent chromatin indirectly remains unknown. Our results suggest that Set1 trimethylation activity, rDNA silencing and the ipl1-2 interaction are correlated, indicating that while Set1 promotes multiple types of methylation, these Set1 phenotypes relate specifically to its trimethylation activity. For example, analysis of the ability of the Set1 mutants to suppress the growth defect of cells with a conditional allele of the S. cerevisiae gene encoding Aurora kinase, ipl1-2, has revealed structural aspects of the active site of Set1 that promote normal kinetochore function. The Set1 mutants that retained the ability to generate H3K4me3, even at levels below 10% of wild type, failed to suppress the growth defect of ipl1-2 cells (Figs. 3, 5 and Table 1). These results suggest that features of the Set1 active site that are required for trimethylation of histone H3 may also be required for methylation of Dam1.

Efforts to express and purify recombinant Set1 from bacteria have been made in our lab to determine the methylation activity of Set1 in the absence of other COMPASS proteins. In addition, other members of COMPASS were expressed and purified to reconstitute a minimal COMPASS complex. However, no methyl-transfer activity was detected in methylation reactions using bacterially expressed Set1 or a minimal complex containing Set1, Bre2 and Swd1 (data not shown). Recent work has shown that COMPASS can be reconstituted from recombinant proteins expressed in insect cells [24]. We expect that future in vitro studies using Set1 mutants in a reconstituted COMPASS complex will provide new insights into substrate binding, catalysis of methyl transfers and product release on both histone and non-histone substrates.
Extracurricular activities of students - a means of forming professional motivation

The article analyzes the specificities of future teachers' extracurricular activities as a means of forming their professional motivation. The author shows that extracurricular work can be considered a factor in generating students' initial interest in their future occupation. The variety and complexity of extracurricular activities reveal the diversity of the world, the profession and human relationships, and lead to increased motivation for acquiring professional pedagogical knowledge.

Introduction

The political, social and economic processes taking place in the world today are unfolding at a previously unprecedented speed. To ensure a decent standard of living for humankind, and to preserve each state's sovereignty and each person's freedom and independence, every country and society needs to keep up with the ongoing changes and adapt to them in an optimal way. Back in 2018, at his 14th major press conference, the President of the Russian Federation expressed the idea that "we need to jump into a new technological order. Without this the country has no future. This is a fundamental question. … It is necessary to concentrate the available resources, find them and concentrate on the most important areas of development" [1][2][3][4][5][6][7][8][9][10][11][12]. One of the most important resources necessary for such a technological breakthrough is a person, with his or her knowledge, competence, experience and relationships. The training of a professional specialist capable of highly skilled creative work is one of the main directions requiring the concentration of joint efforts that President Putin spoke about. The words of Academician E. V. Bondarevskaya therefore remain as pressing as ever: "Education is interaction with the future, since its subject is a growing and developing person, whose spirituality, morality and business qualities will be fully revealed in a future independent life ..." [1, p. 18]. Because the future of every society and every state depends on each person (on personal and professional ideals, values, relationships and qualities), the issues of professional training in general, and of the professional training of future teachers in particular, remain extremely relevant.
Our long-term practical experience in teacher-training education and the results of research activities show that, unfortunately, the social and professional authority of the teaching profession, lost in the 1990s, has never been restored [6]. The education sector continues to be perceived as a service sector, but we categorically cannot agree with this definition [7], because such an approach excludes the upbringing component of education, focusing on learning as training and on its results. Thus, upbringing and its components often remain beyond the attention of university teachers and do not arouse active interest among the majority of future teachers. Learning and upbringing constitute a single pedagogical process, and insufficient attention to one of its components negatively affects the quality of the holistic educational process. That is why the task of forming a stable interest and professionally oriented motives among future teachers is extremely important for ensuring high-quality professional pedagogical training. Science has proven, and practice has confirmed, that an employee with persistent internal, socially and personally significant professional motives works more productively than one without them.

The concept of "motivation" was introduced into scientific usage by A. Schopenhauer. One of the first scientific works devoted to the study of motives is P. Young's "Motives and Behavior". K. Levin considers motives as a meaningful characteristic of needs. The Russian psychologists A. Leontiev and S. Rubinstein define the motive as a key concept of their theory of activity. S. Rubinstein emphasizes that motives constitute the core of the personality. E. Ilyin considers motivation in close connection with the development of the volitional sphere of a person [3]. H. Heckhausen explains motivation as a preparatory stage of action in the context of the interaction of a person and a situation [15]. The results of studying motives and motivation are presented in the fundamental research of national (V. Aseev, V. Vilyunas, E. Ilyin, A. Leontiev, D. Uznadze) and foreign (A. Maslow, H. Heckhausen, etc.) scientists.

Our contemporaries also do not disregard the problem of motivation and motives. A. Dubnyakova analyzes the main approaches to the study of professional activity motives and proposes and substantiates her own classification of professional activity motives. S. Shabalina investigates the motivation of students' professional activity in the pedagogical college. N. Zhdanova focuses on the specificities of teachers' professional activity motivation. N. Panova researches motivation in the context of the teacher's personality development. The issues of forming the professional orientation of students of engineering-pedagogical specialties, closely related to the problem of motivation, are addressed by O. Lnogradskaya [5]. In his monograph, S. Umanets offers his own vision of a way to resolve the contradiction between "the current cognitive and informational pedagogical paradigm of higher professional education, for which the natural motivational component of a student's life is not relevant, and the author's evolutionary-ontological concept, in which the natural essence of a human being and the process of his development is prioritized" [13].
Since the problem of motives and motivation is by its nature complex, multifactorial and multidimensional, modern science can hardly offer a universal means of forming motivation for professional activity.

Methodology

The purpose of the study is to analyze future teachers' extracurricular professionally oriented activities as a factor in forming their professional motivation. The empirical basis of this research is the creation and organization of special professionally oriented activities for future teachers as an upbringing component of their vocational education.

Research concept and methodology. Our research concept is based on the hypothesis that the extracurricular activities of students (future teachers) are one of the factors that have a definite impact on the formation of their professional motivation. The research methods are observation and individual surveys, analysis of the scientific pedagogical and psychological literature, and oral individual interviews and group conversations. On the way towards achieving this purpose, the following tasks were solved: with the help of observation and individual surveys, the attitude of students to their future profession and to extracurricular activities was determined; the scientific pedagogical and psychological literature corresponding to the research problem was analyzed; the technology of forming a creative future teacher's personality was used for attracting students and organizing their extracurricular professionally oriented activities; and oral individual interviews and group conversations were used to determine and justify the effectiveness of extracurricular activities.

The analysis of the psychological and pedagogical literature shows that there is no single comprehensive definition of professional motivation. Different scientific schools interpret the essence of motivation in their own way. V. Vilyunas argues that motivation is a set of processes that inspire and lead to action, while A. Platonov considers motivation a mental phenomenon - the combination of motives [9]. In our research we interpret motivation as "a set of persistent motives, interests, impulses that determines the content, direction and nature of a person's activity and behavior" [10]. The research was carried out in two stages.

Results

At the first stage of the research, observation and individual surveys were carried out; the results indicate that the future teachers had rather low motivation for professional pedagogical activity (only 15%). I. Gladkaya, analyzing extracurricular activities as a factor in future teachers' professional training, demonstrates different approaches to understanding the essence of this notion. Some teachers equate the concepts of "extracurricular work" and "extracurricular activities", while others distinguish between them, emphasizing a certain authoritative connotation of the term "work" compared with the concept of "activity", which is more consistent with the personality-oriented pedagogical paradigm [2]. It seems to us that "extracurricular activity" is a pedagogically expedient students' activity aimed at forming and developing the future teacher's personality for his or her self-realization by making up for deficits of learning and upbringing. In the process of creating and organizing extracurricular activities, it is advisable to take into account and focus on students' interests, inclinations, needs and capabilities. In addition, high school teachers and tutors should not forget J. Locke's idea that the art of a teacher is to make attractive for a student those things that are useful for him.
Professionally oriented extracurricular activity is a necessary component of future teachers' professional training, but this does not mean that the teacher's understanding of its necessity will be shared by students to the same extent. Moreover, as we noted earlier, future teachers (for example, future specialists in labor protection, information and computer technology, economics and management), even in undergraduate courses, often have rather low motivation for professional pedagogical activity. Identifying the causes of this situation is the task of another research project. O. Lnogradskaya, researching the professional orientation of students of engineering-pedagogical specialties, notes that only a fifth of students who choose these areas of training intend to take a teaching job in the future [5]. Her findings coincide with the results of our research on future foreign language teachers' professional motivation, which we conducted several years earlier. The conclusion is that the social prestige of teaching, after its "failure" in the 1990s, is being restored extremely slowly. Another factor with significant influence on future teachers' professional motivation is that the special (learning) component of teacher training traditionally prevails over pedagogical (upbringing) aspects in a "classical" university.

The second stage of the research is devoted to the creation and organization of future teachers' special professionally oriented activities and the subsequent analysis of their level of professional motivation and readiness to fulfill their professional functions. To eliminate the existing imbalance in the professional training of engineering and pedagogical personnel, students of the Donetsk National University annually take part in the Republican Olympiad in Pedagogy with international participation. The process of involving students in extracurricular professionally oriented activities was quite complicated. It was necessary to convince the students of the expediency of this type of activity and of its usefulness for each of them personally. With the help of the Faculty Student Union's activists and through individual conversations with the students, the candidates for the Olympiad team were individually selected by special team-tutors (teacher-volunteers from the Department of Pedagogy). The participants agreed to try something new just out of curiosity and out of respect for their teachers. Nevertheless, the actual interest and motivation of the Olympiad team members towards this type of extracurricular activity was rather low - we can confirm this fact from the team-tutor's point of view.

Traditionally, in the Donbass before the beginning of the civil war in Ukraine in 2014, the Olympiads in Pedagogy consisted of three stages: a theoretical creative written essay, the defense of scientific work, and a practical task in which the participants conducted a fragment of a lesson or an educational event. I. Gladkaya describes the traditional Olympiad in Pedagogy organized by colleagues from St. Petersburg, which also includes three stages: a written home essay, work with a case, and the demonstration of a lesson with reflection on the results of the school working day [2].
The organizers of the Republican Olympiad in Pedagogy in the Donetsk People's Republic have created an alternative model. It is caused, on the one hand, by the difficulties of the political, social and economic situation in which the Donbass has already been for the seventh year. On the other hand, the Donetsk Republic Olympiad reflects the competence approach to professional teacher training and the holistic approach that focuses on the organic link of learning and upbringing based on the main principles of A. S. Makarenko's and V. A. Sukhomlinskiy's pedagogy.

The slogan "Doing good is our way!" has become the motto of the Olympiad, implying a stronger practical orientation of the participants' activity. This model makes it possible not only to fulfill educational tasks defined by the State Educational Standards, but also to solve associated social problems: the popularization of teaching jobs and the provision of volunteer assistance to the residents of the region. These reference points explain why the motto of the event contains no direct pedagogical component; it can be found only in the practical content of the Olympiad. This situation contains an element of intrigue for the participants (something new, unusual and even unexpected in comparison with the previous traditional Olympiads in Pedagogy) that stimulates their interest, attention and activity. It should also be noted that the Olympiad is in fact a distance project on an online platform, which provides an opportunity to increase the number of national and international teams and, consequently, to make the event more competitive. At the same time, the distance form reduces the participants' material costs of taking part in the Olympiad.

The first stage of the Olympiad involves developing a team presentation in the form of a virtual "business card". The main task is to represent one's future teaching job in the most interesting, non-standard and creative way and to make a video clip of the team's performance (limited to 5 minutes). Moreover, the content of the team's presentation must correlate with the motto of the Olympiad and its pedagogical and volunteer orientation, and unleash the creative potential of each member of the participating team. Fulfilling the first Olympiad task considerably intensifies the students' interest in the event and their creative activity, makes them look at their future profession from an unusual point of view, turn to special literature and the Internet for additional information, and integrate existing knowledge and experience in the fields of pedagogy and information and computer technology. The need to complete the assignment expands the horizons of the students' pedagogical universe, taking them beyond the daily routine, strengthening their inspiration for the future teaching job and their desire to act, to make progress and to win the Olympiad. At the first stage of the Olympiad we actually observed the emergence of professional motivation. The students became well-concentrated and self-motivated and understood the goals and tasks facing them.
The second stage of the Olympiad is theoretical. It includes online testing of special pedagogical knowledge: the participants have to answer 50 questions within a limited period of time. The list of special literature for preparing for the second stage is agreed in advance between the Olympiad jury and the team-tutors. The students developed a desire to cope with the tests as well as possible: the need for knowledge and self-actualization [8] and the need to test and analyze the level of their knowledge and readiness for pedagogical activity appeared. The students realized the personal sense of participating in the Olympiad as an opportunity to assess their skills and strengths. As an additional result of this stage, we also observed an increase in the students' professional motivation towards their future teaching job.

The third stage involves the preparation and holding of a volunteer pedagogical action. Each team preferred to demonstrate the specific characteristics of its teaching specialty. With the help of the team-tutors, the participants chose the theme of the action. The future pre-school and primary school teachers organized educational events devoted to Russian decorative and applied art and conducted them for kindergarten and primary school children. The heroes of Russian folk fairy tales came to the first graders and schoolchildren, told them about folk crafts, and taught the children to draw folk ornaments in different styles and techniques. The future secondary school teachers organized excursions to the places of Labor Glory in Donetsk for secondary school students. The future high school teachers, together with the team-tutor and other students, tidied up the military burial sites of Donbass defenders. The future computer science teachers organized master classes in information technology for beginners. The last stage of the Olympiad thus requires the participants to demonstrate their ability to share their knowledge and experience with others. In fact, this coincides with one of the traditional Olympiad tasks - conducting a lesson or an educational event - which enables the participants to demonstrate the level of professional competence they have achieved.

At the third stage the future teachers demonstrated: an adequate level of critical thinking in setting goals and in choosing and justifying the content of their volunteer pedagogical activity; a good level of ability to work in a team (to hear and listen to each other, to communicate with various partners, to take or choose an optimal decision); and a high level of creative project activity. As a result, the students came to understand the social significance of pedagogical activity and the important role of the teaching profession in society.
Since motivation for activity can be considered as a system "of internal motivating elements of a person, such as needs, interests and values, and of external environment factors" [11], we can conclude that the Donetsk National University students' participation in the Republican Olympiad in Pedagogy was at first an external environment factor, but the emerging professional interest of the future teachers in the content of the Olympiad stages transformed the external factor into an internal one. The surveys conducted show that almost all members of the Olympiad team became interested in extracurricular professionally oriented activities (in fact, 100%); the same answers were given by the majority of the students who helped the team prepare for the Olympiad stages (57%) (Fig. 1).

Fig. 1. Professional motivation for future pedagogical activity (1 - before the Olympiad; 2 - after the Olympiad).

However, professional motivation for future pedagogical activity arose in only 30% of the students who were directly or indirectly involved in the preparation for and participation in the extracurricular professionally oriented activity.

Conclusion

Reflection after the event is an obligatory part of the educational process. The students and the team-tutor gathered to discuss the results of the Olympiad (second place) and to search for answers to several questions: what worked out and why; what did not work and why; what was lacking to achieve a better result and to win; and what needs to be done to improve the situation next year. Together with the students we solved "the problem of realizing the personal sense" [4] and significance of extracurricular professionally oriented activities. As our further experience shows, this has changed the future teachers' attitude to such activities. After the Olympiad the students began to show initiative and activity in defining the content and forms of their own extracurricular activities, to approach the teachers' instructions more responsibly, and to take a direct part in various types of extracurricular activities more willingly.

Summing up, we should note that the use of only one type of extracurricular professionally oriented activity cannot serve as a universal means of forming future teachers' professional motivation. We suppose that the organization of students' extracurricular activities should be based on the following principles: humanization of pedagogical activity, its social pre-conditioning and orientation, complexity, scientific nature, creative activity and self-development. This approach allows extracurricular activity to be considered a component of professional teacher training. Future teachers' extracurricular activities establish the preconditions for sustained professional motivation and "provide ample opportunity for forming such a key professional competence as the readiness and the ability to self-development and creative self-realization" [14]. The results of our study make it possible to assert that a creative approach and the non-standard content and forms of future teachers' extracurricular activities function as an important catalyst in the formation of professional motivation.
Electronic Communication between Dithiolato-Bridged Diiron Carbonyl and S-Bridged Redox-Active Centres

The catalytic potential of linked redox centres is exemplified by the catalytic site of [FeFe]-hydrogenases, which features a diiron subsite linked by a cysteinyl S atom to a 4Fe4S cube. The investigation of systems possessing similarly linked redox sites is important because it provides a context for understanding the biological system and for the rational design of abiological catalysts. The structural, electrochemical and spectroscopic properties of Fe2(CO)5{CH3C(CH2S)2CH2SPhNO2}, I-bzNO2, and the aniline analogue, I-bzNH2, are described, and IR spectroelectrochemical studies have allowed investigation of the reduction products and their reactions with CO and protons. These measurements have allowed identification of the nitrobenzene-centred radical anion and quantification of the shifts of the ν(CO) bands on ligand-based reduction compared with NO2/NH2 exchange and protonation of the pendent ligand. The strength of thioether coordination is related to electronic effects, and competitive binding studies with CO show that CO/thioether exchange can be initiated by redox processes of the pendent ligand. Stoichiometric multi-electron/proton transfer reactions of I-bzNO2 localised on nitrobenzene reductions occur at mild potentials, and a metal-centred reduction in the presence of protons does not lead to significant electrocatalytic proton reduction.

Introduction

Dithiolato-bridged diiron carbonyl compounds have important structural and functional similarities with the catalytic site of [FeFe]-hydrogenases [1][2][3][4][5][6] and this has driven detailed investigations into the chemistry of this class of compound [7][8][9][10][11][12]. Critical to the functioning of the catalytic centre, or H-cluster, is the transfer of electrons between the diiron subsite and a 4Fe4S cube which is linked through a bridging S atom from a cysteinyl residue. It has been demonstrated that tripodal bridging ligand systems can be exploited to provide the diiron-bridging dithiolato S atoms together with a thioether which is able to bind, reversibly, to one of the Fe atoms [13,14]. We have previously shown that such systems can be used to generate mixed carbonyl, cyano complexes which are excellent spectroscopic models of the CO-inhibited forms of the H-cluster [15,16] and have allowed the preparation of a diiron subsite with S-atom linking to a 4Fe4S cube, mimicking key characteristics of the full H-cluster [17]. An important extension of this work is the exploration of the impact of a redox-active group bound to the linking S atom on the chemistry of the system [18][19][20][21]. Interaction between the diiron and S-linked redox groups can be studied by exploiting the sensitivity of the ν(CO) bands to the electronic structure of the metal to which the CO group is bound. Further, cases where the remote redox centre has similarly sensitive reporter bands allow reliable assignment of the character of the redox process and interpretation of the chemistry which follows oxidation or reduction. Nitrobenzene presents an ideal redox partner for such studies. The compound has a well-defined one-electron reduction at mild potentials, and the nitro group gives strong NO2 stretching bands in the IR spectrum with wavenumbers which are sensitive to the local environment. The interaction between the linked redox sites can be explored through examination of the impact of altering the redox state of a proximal nitrobenzene group on the {2Fe3S}-carbonyl subsite to which it is attached by S-ligation (I-bzNO2, Scheme 1).
In this investigation we set out to (i) establish whether the primary reduction of I-bzNO2 is localised on the nitrobenzene group, (ii) explore the impact of the pendent redox group on reduction of the diiron subsite by comparison of the reduction chemistry of I-bzNO2 and the related aniline adduct I-bzNH2, and (iii) examine the reduction chemistry of I-bzNO2 under reducing conditions in the presence of a non-coordinating acid, 2,6-lutidinium (LutH+).

Electrochemistry of I-bzNO2 and I-bzNH2

The nitrobenzene complex, I-bzNO2, shows two well-defined reduction processes and a more complicated reoxidation wave (Figure 1). The primary electron-transfer process at E°' = −0.85 V versus Ag/AgCl is a diffusion-controlled, single-electron, reversible reduction: the peak separation ΔE = 75 mV (293 K) is close to that expected for a one-electron process, the peak current, ip(reduction), is linearly dependent on the square root of the scan rate, ν^1/2, and the peak current ratio, ip(oxidation)/ip(reduction), is close to unity. The second, partly reversible reduction (Ep = −1.21 V) is similar to that of related complexes with different pendent thioethers [13], but the more positive potential and high reversibility of the primary electron-transfer step suggest that it is associated with the pendent nitrobenzene group. This conclusion is supported by the similarity of the cyclic voltammetric responses of nitrobenzene and the uncoordinated tripodal ligand (Figure S1).
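The reversibility diagnostics cited above (peak separation close to the Nernstian value, ip proportional to ν^1/2, and a peak-current ratio near unity) are simple to check numerically. The sketch below is illustrative only: the scan rates and peak currents are invented placeholder values, not data taken from Figure 1.

```python
import numpy as np

# Hypothetical cyclic-voltammetry data: scan rates (V/s) and cathodic peak currents (µA)
scan_rates = np.array([0.05, 0.1, 0.2, 0.4, 0.8])
i_peak = np.array([3.1, 4.4, 6.3, 8.8, 12.5])

# Randles-Sevcik behaviour: a diffusion-controlled process gives i_p proportional to v^(1/2),
# so a linear fit of i_p against sqrt(v) should be straight and pass close to the origin.
slope, intercept = np.polyfit(np.sqrt(scan_rates), i_peak, 1)
print(f"i_p = {slope:.2f} * sqrt(v) + {intercept:.2f}")

def nernstian_peak_separation(n=1, T=293.0):
    """Peak-to-peak separation (mV) expected for a reversible n-electron couple."""
    R, F = 8.314, 96485.0
    return 2.218 * R * T / (n * F) * 1000

print(f"Expected ΔE_p ≈ {nernstian_peak_separation():.0f} mV (observed: 75 mV)")
```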
The metal-based reduction of Fe2(µ-SCH2)2C(CH3)SR(CO)5 compounds [13] exhibits limited reversibility, with the potential of peak cathodic current (Epc) sensitive to the donor properties of the R group. The similarity of the Epc values for the diiron-based reduction of I-bzNO2 and I-bzNH2 suggests that the S-bound reduced nitrobenzene and aniline groups have a similar electronic interaction with the diiron centre. For both complexes, weak re-oxidation features are evident in the anodic scan.

The addition of a proton source to the reaction mixture leads to significant increases in the current response, consistent with multielectron reduction. The nitrobenzene-localised reduction of I-bzNO2 is shifted to more positive potentials, and the current increase and loss of reversibility are very similar to those observed for nitrobenzene (Figure S2), indicative of the replacement of the one-electron reversible process by a 6-electron/6-proton reaction at high acid concentration. The reaction product, I-bzNH2, may then undergo metal-based electron/proton reactions; acid-dependent voltammetry of I-bzNH2 reveals shifts of the onset reduction potential and marked increases in current consistent with electrocatalytic proton reduction (Figure 2). The shift of the reduction potential to more positive values with increasing acid concentration is indicative of coupled electron-proton reactions, and the sigmoidal character of the onset current may indicate that the initial electron-proton product has limited electrocatalytic activity. This detail of the electrochemical response is explored with the aid of spectroelectrochemical methods in Section 2.6.

A compound containing both a diiron subsite and a nitrobenzene component has been reported by Sun and coworkers, [Fe2(CO)6{(µ-SCH2)2N(C6H4-p-NO2)}] (Fe2-adt(bzNO2)) [22]. In that case the reduction processes associated with the diiron centre were attributed to the first reduction wave and the reduction of nitrobenzene was not explicitly considered. Addition of HOAc to solutions of Fe2-adt(bzNO2) was observed to give an increased current response, which was attributed to electrocatalytic proton reduction [22].
EPR Spectroscopy of the In-Situ Generated Reduction Product of I-bzNO2

Despite the electrochemical reversibility of the primary reduction process of I-bzNO2, the one-electron reduced product was unstable under anaerobic conditions and it was necessary to employ in-situ methods of sample generation in order to obtain EPR spectra. Application of potentials sufficient to generate a significant cathodic current was accompanied by the appearance of weak, but highly resolved, spectra (Figure 3). Simulation of the spectra is consistent with a radical centred on the pendent nitrobenzene group with a g value of 2.00543 and hyperfine coupling dominated by a single N (aN = 24.43 MHz) and inequivalent coupling constants to the four protons of the substituted nitrobenzene ring (aH = 10.3, 8.4, 3.6 and 3.4 MHz). The inequivalence of the nitrobenzene protons is accounted for by hindered conformational change of the pendent nitrobenzene group. Weaker hyperfine coupling to the methylene protons of the linker (aH = 1.4 MHz) is also evident in the spectra. While the conformation of the thioether-bound form of the complex renders the methylene protons inequivalent, the simulation shown in Figure 3 was calculated using a single coupling constant for these protons. Support for coupling of the methylene protons is based on DFT calculations showing a small population of the SOMO on the methyl group and on reported EPR spectra of reduced 1-(methylthio)-4-nitrobenzene in DMSO [23], for which the hyperfine coupling constants aN, a2,6, a3,5 and aR have values of 26.7, 9.46, 3.24 and 0.64 MHz. The transient character of the reduced product was confirmed by the loss of the EPR signal within ca. 1 min of switching the potentiostat to open circuit. The simulated spectrum was calculated using the EasySpin suite of routines [24]; details of the simulation parameters are given in Table S1.
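The actual simulation reported here was performed with EasySpin, as noted above. For illustration only, a crude first-order reconstruction of the hyperfine pattern can be built directly from the quoted coupling constants; the sketch below works on a frequency-offset axis, uses Lorentzian absorption lines rather than a field-swept first-derivative spectrum, and ignores second-order effects, so it is a schematic of the splitting pattern rather than a substitute for the published simulation.

```python
import itertools
import numpy as np

# Isotropic hyperfine constants (MHz) quoted for [I-bzNO2]-: one 14N (I = 1),
# four inequivalent ring protons and two linker CH2 protons (all I = 1/2).
couplings = [
    (24.43, (-1.0, 0.0, 1.0)),                 # 14N
    (10.3, (-0.5, 0.5)), (8.4, (-0.5, 0.5)),   # ring protons
    (3.6, (-0.5, 0.5)), (3.4, (-0.5, 0.5)),
    (1.4, (-0.5, 0.5)), (1.4, (-0.5, 0.5)),    # methylene protons (single constant)
]

def first_order_spectrum(couplings, linewidth=0.8, npts=4000):
    """First-order isotropic stick pattern broadened with Lorentzian lines (MHz axis)."""
    combos = list(itertools.product(*(ms for _, ms in couplings)))
    offsets = [sum(a * m for (a, _), m in zip(couplings, combo)) for combo in combos]
    x = np.linspace(min(offsets) - 5.0, max(offsets) + 5.0, npts)
    y = np.zeros_like(x)
    for off in offsets:
        y += linewidth**2 / ((x - off) ** 2 + linewidth**2)  # Lorentzian absorption line
    return x, y / y.max()

x, y = first_order_spectrum(couplings)
print(f"simulated {x.size} points from {3 * 2**6} first-order transitions")
```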
Assignment of the EPR spectrum to [I-bzNO2]− is based on the g value and the good agreement between the relative magnitudes of the hyperfine coupling constants for [I-bzNO2]− and those of the radical anion of 1-(methylthio)-4-nitrobenzene. The inequivalence of the hyperfine coupling constants of the four protons of the substituted nitrobenzene ring is indicative of the bridging thioether remaining in the closed, bound form for the reduced species. The barrier to conformational change of the substituted nitrobenzene group of I-bzNO2 has been estimated from NMR spectroscopy (Table S2). The coalescence temperature of the ortho and meta protons is 200 K and the temperature dependence of the NMR spectra can be simulated with a ΔG‡ of 38-40 kJ mol−1. If the coordinate bond between the Fe and the thioether is lost for the EPR-active species, it is expected that the ortho and meta protons will become equivalent on the EPR timescale, as is observed for substituted nitrobenzene radicals [23].

Given the assignment of the EPR spectrum to [I-bzNO2]−, the magnitude of the hyperfine coupling constants can be used to assess the localization of charge on the diiron and nitrobenzene fragments of the molecule. Notwithstanding the small sensitivity of the hyperfine coupling constants of the anion of 1-(methylthio)-4-nitrobenzene to the solvent [23], the similarity of the magnitude of the values obtained for [I-bzNO2]− suggests that reduction is strongly localised on the nitrobenzene fragment. This observation is consistent with arguments based on the electrochemical measurements discussed in the previous section.
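Returning briefly to the exchange barrier estimated above from the variable-temperature NMR data, the 38-40 kJ mol−1 value is of the size expected from the standard two-site coalescence treatment. The sketch below applies the usual Eyring-based expression; the coalescence temperature is the 200 K quoted above, but the chemical-shift separation of 120 Hz is a placeholder assumption, not a value taken from Table S2.

```python
import math

R = 8.314            # J mol^-1 K^-1
kB = 1.380649e-23    # J K^-1
h = 6.62607015e-34   # J s

def coalescence_barrier(Tc, delta_nu):
    """Eyring-based free-energy barrier (J/mol) from a two-site NMR coalescence.

    k_c = pi * delta_nu / sqrt(2) is the exchange rate at coalescence for two
    equally populated, uncoupled sites separated by delta_nu (Hz).
    """
    k_c = math.pi * delta_nu / math.sqrt(2)
    return R * Tc * math.log(kB * Tc / (h * k_c))

dG = coalescence_barrier(Tc=200.0, delta_nu=120.0)  # delta_nu is an assumed value
print(f"ΔG‡ ≈ {dG / 1000:.1f} kJ/mol")  # ≈ 39 kJ/mol, within the quoted 38-40 kJ/mol range
```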
Infrared Spectroelectrochemistry (IR-SEC) of the Reduction of I-bzNH2

The reduction chemistry of the diiron subsite, free of complications associated with the redox chemistry of the pendent nitrobenzene moiety, is most conveniently explored using the aniline analogue, I-bzNH2. IR-SEC methods have been developed which allow collection of spectra from samples maintained in a strictly anaerobic environment and under elevated pressures of selected gases [25]. This allows the collection of IR spectra throughout the redox process, allowing the identification of longer-lived (>ca. 2 s) intermediate products. The spectra are presented in differential absorption format and are calculated using as reference the spectrum of the thin film of solution trapped between the working electrode and the IR-transmitting window at a resting potential of the system (usually 0 V) immediately before application of a reducing potential. Reduction at −1.2 V leads to at least two products which are evident either immediately following application of a reducing potential (Figure 4a) or at longer times (Figure 4b). The product bands (1910, 1890 and 1830 cm−1) are in a spectral region consistent with CO terminally bound to electron-rich iron centres. The broad band profile is insufficiently distinct to provide significant insight into the possible structures of the reduced species; however, the absence of bands near 1700 cm−1 suggests that CO-bridged species are not formed in significant concentration. This behaviour contrasts with the chemistry of reduced dithiolato-bridged hexacarbonyl complexes [26][27][28]. Re-oxidation of the sample at 0 V leads to ca. 20% recovery of the starting material. The residual bands in the spectrum have a band profile significantly different from that of the starting complex (Figure 4c).
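The differential absorption format described above is simply the absorbance of each spectrum computed against the resting-potential single-beam reference. A minimal sketch of that conversion is given below; the two-point arrays are invented values used only to show the sign convention (growth bands positive, depletion of the starting material negative).

```python
import numpy as np

def differential_absorbance(single_beam_sample, single_beam_reference):
    """Differential absorbance ΔA = -log10(S_sample / S_reference).

    Both inputs are single-beam intensity spectra on the same wavenumber grid.
    Positive ΔA indicates bands growing in relative to the resting-potential film;
    negative ΔA indicates depletion of the starting complex.
    """
    s = np.asarray(single_beam_sample, dtype=float)
    r = np.asarray(single_beam_reference, dtype=float)
    return -np.log10(s / r)

# Illustrative check: 10% more transmitted light (depletion) and 10% less (growth)
print(differential_absorbance([1.10, 0.90], [1.00, 1.00]))  # -> [-0.041, +0.046]
```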
IR-SEC of the Reduction of I-bzNO2

The study of the interaction between the diiron-centred and nitrobenzene-centred redox processes is facilitated by the strong IR chromophores sensitive to the electron richness of the respective centres. As suggested by the partial reversibility of the electrochemistry (Figure 1), the reduced compound is unstable and interrogation of these species requires rapid collection of IR spectra during reduction. IR-SEC measurements recorded following reduction at potentials between the first and second reduction processes of I-bzNO2, and subsequent reoxidation cycles, are shown in Figure 5. Experiments have been conducted using both acetonitrile and dichloromethane as solvent. The spectra recorded in dichloromethane give a better-defined separation between the two reduction processes and, additionally, are less affected by solvent absorption in the 1200 to 1600 cm−1 region. This allows better characterisation of the spectral changes in the NO2 stretching region.

The spectral changes in the ν(CO) region following reduction of I-bzNO2 at mild potentials are complex but diagnostic of a small spectral shift of the band profile (Figure 5a). In the early stages of the reduction the pattern of ν(CO) bands is retained, consistent with retention of the geometry about the diiron centre. The complete removal of the bands due to I-bzNO2 and the small wavenumber shift of the ν(CO) bands are more accurately determined with reference to the single-beam spectra, which are available as Figure S4. The 5-6 cm−1 shift of the highest-wavenumber band to lower wavenumber may be compared with the 71 cm−1 shift of the highest-wavenumber ν(CO) band of Fe2(µ-S(CH2)3S)(CO)6 following metal-centred one-electron reduction [29,30]. As the reduction proceeds there is growth of lower-wavenumber ν(CO) bands (1958, 1923 and 1892 cm−1) which are due to a byproduct that is not reoxidised at 0 V. The lower-wavenumber region (1200-1650 cm−1) contains the symmetric and antisymmetric stretches of the nitro group together with ring vibrations of the substituted benzene. Since reduction leads to population of π antibonding orbitals, it is likely that the ring stretching vibrations (1601 and 1580 cm−1) shift to lower wavenumbers, with the growth band near 1566 cm−1 a plausible candidate. This implies that the NO stretching vibrations are significantly shifted following reduction of I-bzNO2, with the asymmetric and symmetric stretches shifting from 1527 and 1345 cm−1 to 1364 and 1243 cm−1. The large wavenumber shifts of the NO stretching vibrations imply significant involvement of the nitro group in the SOMO, a conclusion consistent with the large hyperfine coupling constant for the N atom of [I-bzNO2]−. It is noted that the higher noise level in the IR spectra between 1250 and 1280 cm−1 is associated with strong solvent absorption and this distorts the apparent wavenumber and profile of the lower-wavenumber growth band (1243 cm−1). The application of a re-oxidising potential leads to rapid depletion of the growth bands with concomitant recovery of ca. 75% of the starting complex (Figure 5b).
Reduction of the diiron subsite of I-bzNO2 can be achieved by application of more reducing potentials, where sequential reduction of the nitrobenzene and diiron fragments is shown in Figure 6. After generation of [I-bzNO2]−, further reduction leads to shifts of the ν(CO) bands, together with the aryl and NO2 stretching bands (Figure 6b). The ν(CO) bands give a group of broad, unresolved bands centred about 1913, 1885 and 1822 cm−1 which are not experimentally distinguishable in form and wavenumber from those obtained following diiron-based reduction of I-bzNH2 (Figure 4). Both the complexity of the band profile and variations in the time-dependence of the increases in absorbance at different wavenumbers suggest that a mixture of products is formed. The current response for the second reduction is at least twice that of the 1-electron nitrobenzene-localised reaction, this being consistent with a 2-electron reduction at the diiron centre (Figure 6e), possibly with further rearrangement and reduction processes. Re-oxidation at −0.7 V leads to loss of the lower-wavenumber ν(CO) bands (at 1913, 1885 and 1822 cm−1) with some recovery of the starting complex (Figure 6c), and another species can be oxidised at 0 V (Figure 6d), but less than 10% of the starting complex is recovered in these experiments. The markedly different pattern of ν(CO) bands indicates a significant change in the geometry of the diiron pentacarbonyl core.
Redox-Dependent Competitive Binding of CO to I-bzNO2 and I-bzNH2

The products of diiron-centred reduction of I-bzNO2 and I-bzNH2 differ from those obtained following reduction of the diiron hexacarbonyl analogues, where this is most marked by an absence of bands in the region between 1700 and 1800 cm−1, those typical of a bridging CO moiety. Previous studies have indicated that the binding of the pendent S group is sensitive to oxidation state and a likely cause of the differing reaction chemistry is associated with the lability of the linking S atom. The relative strength of interaction between the Fe atom and the bridging S atom can be deduced from competition studies involving an alternative ligand. In this regard CO is an ideal ligand as it binds strongly to the Fe centre to give a hexacarbonyl species with a well-defined spectral profile. Measurements for a series of pendent thioethers are summarised in Table 1 of ref.
[31]. There is a clear relationship between the electron-withdrawing character of the substituent and the equilibrium constant for thioether replacement by CO, with I-bzNO2 having a value ca. 30 times larger than that of I-bzNH2. If it is assumed that the free energies of the hexacarbonyl species are approximately the same, then the Keq values will reflect the relative stability of the S-bound form. The strongest interaction is found in the cases where the substituent is most electron rich.

The addition of CO to the gas mixture in the IR-SEC cell containing I-bzNH2 gives a mixture of pentacarbonyl and hexacarbonyl species. Reduction leads to depletion of the bands of both species, with the loss of the hexacarbonyl species slightly preferred over I-bzNH2. It is important to note that among the products generated during the reduction reaction is a species giving rise to a weak band near 1720 cm−1 (Figure 7a). The wavenumber of this band is consistent with its assignment to a bridging CO group. Application of increasingly positive potentials to the solution results in oxidation first of the species with ν(CO) bands between 1800 and 1900 cm−1 (Figure 7b), then of species with higher-wavenumber ν(CO) bands (Figure 7c), before oxidation of the species with the bridging CO group (Figure 7d). Similarly high potentials were required to oxidise the most stable CO-bridged species obtained after reduction of Fe2(µ-(SCH2)2CH2)(CO)6 [29].

The impact of a change in the electron richness of the pendent group on the equilibrium constant for CO displacement of the bridging S atom is well illustrated by experiments on I-bzNO2 in CH2Cl2 conducted under elevated pressure of CO (Figure 8). The application of mild reducing potentials leads to depletion bands of the nitrobenzene and, at a slower initial rate, of the diiron hexacarbonyl fragment. Notably, the growth ν(CO) bands have a different pattern from those of I-bzNH2 obtained in related experiments (Figure 7) and the wavenumbers of the bands (2048, 1983, 1964 and 1930 cm−1) are in good agreement with the spectral bands of [I-bzNO2]− (Figure 5) after taking into account the different depletion bands in the two experiments. In addition to the bands due to the terminally-bound CO groups, there is a weak band at 1690 cm−1 which appears to be associated with the initial product. The reduction products of the reaction are highly dependent on the final potential, and experiments conducted with a more reducing potential give more complex spectra with further reduction of the diiron core. The spectral features obtained following reduction of CO-saturated CH2Cl2 solutions of I-bzNO2 are consistent with the initial species being the S-dissociated hexacarbonyl species and the initial one-electron reduction product having re-coordination of the bridging S atom. The explanation of the 1690 cm−1 band remains open. The very low wavenumber of the band is inconsistent with its assignment to a bridging CO group as judged by the electron richness of the diiron centre based on the wavenumbers of the terminally-bound ν(CO) modes.
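A rough sense of the energy scale behind the ca. 30-fold difference in Keq noted above can be obtained from the standard relation between equilibrium constants and free energies; the temperature of 298 K is an assumption here, since the conditions of the competition measurements are given in ref. [31] rather than in this text:

ΔΔG° = −RT ln[Keq(I-bzNO2)/Keq(I-bzNH2)] ≈ −(8.314 J mol−1 K−1)(298 K) ln 30 ≈ −8.4 kJ mol−1

Under the assumption stated in the text (comparable free energies for the two hexacarbonyl species), this ca. 8 kJ mol−1 corresponds to the extra stabilisation of the S-bound, pentacarbonyl form in I-bzNH2 relative to I-bzNO2.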
Reduction of I-bzNO2 in the Presence of Lutidinium, HLut+

The thiolato-bridged diiron carbonyl compounds are well known catalysts of proton reduction, albeit at high thermodynamic cost. The reduction of I-bzNO2 in the presence of an excess of HLut+ at potentials sufficiently positive to avoid reduction of the diiron centre is shown in Figure 9a. The selection of HLut+ as the acid is based on the non-coordinating character of the conjugate base, Lut. The ν(CO) spectra show differential absorption spectra similar to those obtained in experiments conducted in the absence of acid (Figure 6a) and these indicate that there is a small shift of the ν(CO) band profile to lower wavenumbers. In the absence of acid, conversion of the pendent nitrobenzene group to the radical anion is associated with a 5.5 cm−1 shift of the two intense ν(CO) bands. In these experiments a shift of 3.5 cm−1 is observed. While small, this difference is significant considering the 2.5 cm−1 difference in wavenumbers of the corresponding bands of I-bzNO2 and I-bzNH2. Under the conditions of the experiment, reduction of the pendent nitrobenzene group will be a 6e, 6H+ process leading to formation of I-bzNH2, which may then engage in protonation equilibria giving [I-bzNH3]+. The reaction of the nitro group is indicated by the depletion of the NO2 stretching bands without the appearance of the corresponding bands of the radical anion (Figure 6a). Consumption of the acid during reduction is indicated by depletion bands of LutH+ (1655, 1627, 1238 cm−1) and growth bands due to Lut. If the potential is returned to 0 V at this stage of the reaction, there is no recovery of the starting material. The slow rate of depletion of the starting complex relative to that observed in
experiments conducted in the absence of acid (Figure 8) is due to the slow rate of ion diffusion in the thin layer geometry of the IR-SEC cell. This, of course, leads to more pronounced effects for the multi-electron process. Re-oxidation at 0 V after the initial reduction at −0.9 V does not lead to any significant spectral changes (Figure S5).

If, after initial reduction, a potential is applied sufficient to reduce the diiron centre, then the ν(CO) bands of the initial product are rapidly replaced by broad, lower-wavenumber bands centred about 1950 cm−1 (Figure 9b). Reduction of the diiron centre is associated with additional, though limited, loss of LutH+. If the initial conversion of the nitrobenzene fragment of I-bzNO2 to [I-bzNH3]+ is a net 6e, 7H+ process, then the spectral changes are consistent with the further consumption of 2-3 protons/complex. Neither the spectral response nor the current response during reduction is consistent with there being significant electrocatalytic proton reduction.

The sensitivity of the protonation chemistry of the reduced forms of I-bzNO2 is of interest both because the starting complex has CO substitution of the bridging S atom (Figure 8) and because it is well known that CO inhibits electrocatalytic proton reduction, both of the enzyme and of several of the model compounds. As is evident from the depletion bands in the ν(CO) region, mixtures of I-bzNO2 and LutH+ in CH2Cl2 under elevated pressures of CO are in the open hexacarbonyl form (Figure 10). Reduction of the pendent nitrobenzene fragment proceeds with product bands indistinguishable from those obtained in experiments conducted in the absence of CO (Figure 10a). In this case, reduction of the nitrobenzene fragment and its conversion to the aniline analogue is accompanied by coordination of the bridging S atom and elimination of CO. The stoichiometry of LutH+ consumption appears to be similar irrespective of whether the starting I-bzNO2 complex is in the closed pentacarbonyl or open hexacarbonyl form. As expected for the [I-bzNH3]+ product, no spectral changes are associated with the application of a re-oxidising potential (Figure 10b). Application of potentials sufficient to reduce the diiron core does not lead to significant electrocatalysis.
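For bookkeeping, the "6e, 6H+" and "net 6e, 7H+" descriptions used above correspond to the standard overall stoichiometry of nitroarene reduction to the aniline and, with one further protonation, to the anilinium salt (written here for the pendent aryl group in isolation, as an illustration rather than a mechanism):

Ar-NO2 + 6 e− + 6 H+ → Ar-NH2 + 2 H2O

Ar-NO2 + 6 e− + 7 H+ → Ar-NH3+ + 2 H2O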
Discussion

Details of the redox chemistry of I-bzNO2 and I-bzNH2 and the interactions between the redox-accessible species with CO and LutH+ have been explored using a combination of electrochemical and in-situ spectroscopic methods. A summary of the chemistry is given in Scheme 2. Identification of the species derived from I-bzNO2 and I-bzNH2 is based primarily on the ν(CO) spectra, supplemented by the bands due to the NO2 stretching modes and the EPR spectra of the one-electron reduced form of I-bzNO2. These studies have allowed unambiguous assignment of the first reduction of I-bzNO2 to a ligand-centred process. This leads to an increase in the electron richness of the bridging S atom and this is reflected by a 5-6 cm−1 shift of the ν(CO) band profile to lower wavenumbers. The I-bzNH2 compound has a ν(CO) band profile intermediate between that of I-bzNO2 and [I-bzNO2]−. The small resultant shift in the wavenumbers of the ν(CO) bands reflects subtle changes in the electronic structure of the diiron core which can be translated into dramatic changes in the relative stability of different reaction products. This is illustrated by the competitive binding of CO and the pendent thioether. The equilibrium constant for the reaction of the thioether-bound form of the 2Fe3S pentacarbonyl complexes with CO, to give the substitution product with CO replacing the thioether, strongly favours the pentacarbonyl, thioether-bound form for I-bzNO2 but strongly favours the hexacarbonyl, thioether-dissociated form for [I-bzNO2]−. The distribution of products formed following metal-based reduction is expected to be similarly sensitive to the electronic structure of the core and this may be critical in determining the productivity of this general class of compound in catalytic reactions at strongly reducing potentials.
The reduction of I-bzNO2 in the presence of LutH+ reveals a well-defined 6 electron, 6 or 7 proton reaction with formation mostly of [I-bzNH3]+, where the shift of the band profile by ca. 4 cm−1 is approximately midway between that of I-bzNO2 and I-bzNH2 (ca. 2.5 cm−1). After conversion of I-bzNO2 to [I-bzNH3]+, further reduction of the diiron subsite leads to further consumption of protons (2-3 equivalents) in a reaction which is unlikely to be catalytic. In this case the increase in current response with the addition of acid to I-bzNO2 involves stoichiometric consumption of protons unless the applied potential is greater than that required for reduction of the diiron subsite. In this case the sigmoidal current response observed for reduction of I-bzNH2 with acid (Figure 2) is most likely due to stoichiometric reaction between protons and the initial reduction products. More reducing potentials are needed to obtain an electrocatalytic response. It is interesting to note that despite the different starting species, the reaction products obtained following reduction of I-bzNO2 in the presence of acid are the same for reactions conducted under elevated pressures of N2 or CO. Clearly the reduction of the nitrobenzene fragment to protonated aniline proceeds similarly for the thioether-bound, pentacarbonyl and thioether-dissociated, hexacarbonyl forms, and the equilibrium for the protonated aniline adduct lies strongly in favour of the thioether-bound, pentacarbonyl form. The reaction is rapid, at least on the timescale of the IR-SEC experiments, as no additional intermediate species were observed.

The behaviour of I-bzNO2 on reduction in the presence of protons provides a more solid basis for the interpretation of the voltammetric response of Fe2-adt(bzNO2) [22]. In light of the voltammetry of nitrobenzene and thioether-linked nitrobenzene ligands examined herein, together with the markedly different reversibility of the primary reduction process of the complex relative to that of the aniline adduct, Fe-adt(bzNH2) (Figure 2 of ref.
[22]), the reported responses are more consistent with the first wave being dominated by reduction at the nitrobenzene subsite. Further, it is significant that the current associated with the first reduction process saturates at low proton concentrations. It is more likely that stoichiometric electron/proton reactions following reduction of nitrobenzene account for the purported electrocatalysis at milder potentials. At more reducing potentials, the assignment of any proton-dependent current increases to electrocatalysis becomes problematic owing to formation of species unrelated to the starting complex or heterogeneous electron transfer from insoluble reduction products [32]. While, perhaps, the interpretation of the electrochemistry of Fe2-adt(bzNO2) is in error, resolution of the details of the chemistry of reactive systems such as the dithiolato-bridged carbonyl compounds clearly requires the combination of techniques.

The observation of stoichiometric, as opposed to electrocatalytic, proton reduction following metal-based reduction of I-bzNO2 in excess HLut+ is significant as it suggests that electrocatalytic proton reduction of weak acids requires either the application of more strongly reducing potentials or reorganization of the parent complex. In this case the relationship between the catalytic species and the initial diiron compound is ill-defined. Similar comments apply in relation to studies of the purported electrocatalytic behaviour of a broader range of dithiolato-bridged diiron compounds at strongly reducing potentials, a situation which applies frequently when acetic acid is the proton source in non-aqueous solvents [33][34][35][36][37]. In this contribution we highlight the importance of monitoring both the current response from the system and the extent of reduction of the pendent redox centre and the diiron centre of this important class of compound.

General

All manipulations were performed under an inert atmosphere of N2 using the Schlenk technique. Solvents were dried and distilled under N2 following usual procedures (Na for toluene and hexane, Na/benzophenone for tetrahydrofuran and CaH2 for acetonitrile). Chemical compounds were purchased from Aldrich (St. Louis, MO, USA) and were used as supplied without further purification. Micro-analysis was performed by Medac Ltd. (Egham, Surrey, UK). NMR spectra were recorded on a JEOL Lambda 400 MHz spectrometer (Tokyo, Japan) and FT-IR spectra on a Shimadzu FTIR-8300 (Kyoto, Japan). A Bruker ECS 106 X-band spectrometer (Billerica, MA, USA) was used to collect EPR spectra and a Wilmad WG-810 electrolytic cell (Vineland, NJ, USA) assembly allowed in situ electrochemical generation of reduced compounds. EPR spectra were simulated with the aid of the EasySpin subroutines [24], which operate within a MATLAB environment.
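As an aside, the first-order hyperfine pattern implied by the coupling constants quoted later for [I-bzNO2]− (aN = 26.7, a2,6 = 9.46, a3,5 = 3.24 and aR = 0.64 MHz; see Table S1 and Figure 3) can be sketched without EasySpin. The short Python fragment below illustrates only the splitting arithmetic, not the simulation used in this work; treating the small aR coupling as a single I = 1/2 nucleus is an assumption made purely for the example.

import itertools

# Hyperfine couplings (MHz) quoted for [I-bzNO2]-: one 14N (I = 1), two
# equivalent ring protons at positions 2,6 and two at positions 3,5.
# The aR coupling is treated as a single I = 1/2 nucleus for illustration only.
nuclei = [
    (26.7, (-1.0, 0.0, 1.0)),                    # aN, 14N, I = 1
    (9.46, (-0.5, 0.5)), (9.46, (-0.5, 0.5)),    # a(2,6), two equivalent 1H
    (3.24, (-0.5, 0.5)), (3.24, (-0.5, 0.5)),    # a(3,5), two equivalent 1H
    (0.64, (-0.5, 0.5)),                         # aR (multiplicity assumed)
]

# First-order resonance offsets from the centre of the spectrum: sum_i a_i * mI_i.
lines = {}
for ms in itertools.product(*(m for _, m in nuclei)):
    pos = round(sum(a * m for (a, _), m in zip(nuclei, ms)), 3)
    lines[pos] = lines.get(pos, 0) + 1

for pos in sorted(lines):
    print(f"{pos:8.2f} MHz   relative intensity {lines[pos]}")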
Electrochemistry

Cyclic voltammetry experiments were controlled using an Autolab PGSTAT 30 (Metrohm, Herisau, Switzerland) and digital simulations were performed using Digisim version 3.0 software (BASi, West Lafayette, IN, USA). Experiments were carried out in a three-compartment glass cell with a vitreous carbon disk as the working electrode.

Spectroelectrochemistry

Spectroelectrochemical (SEC) experiments were conducted using a purpose-built cell previously described [25]. All experiments employed a 3 mm diameter vitreous carbon working electrode, a silver pseudo-reference electrode and a platinum foil counter electrode. The potentials of the SEC experiments are uncorrected and given relative to the silver pseudo-reference electrode. Solutions for SEC analysis were prepared under strictly anaerobic conditions either through the agency of a Vacuum Atmospheres glove box or using standard Schlenk techniques. The applied potential was controlled using a PAR model 362 potentiostat (Princeton Applied Research, Oak Ridge, TN, USA). A Powerlab 4/20 interface (ADInstruments, Dunedin, New Zealand) using EChem V1.5.2 or Chart V4.12 provided a means of setting the applied potential and monitoring the potential and current response during SEC experiments. IR spectra were obtained using a Bio-Rad FT175C FTIR equipped with a Ge/KBr beamsplitter and narrow band MCT detector (Hercules, CA, USA). Spectral subtraction and curve fitting were performed using either in-house programs or Grams/32 AI software (Galactic Industries Corporation, Salem, NH, USA). Final plots were generated using Igor Pro (version 5.04B, Wavemetrics, Lake Oswego, OR, USA).

Synthesis

NaH was washed with hexane and then dissolved in THF (50 cm3) under a dinitrogen atmosphere. CH3C(CH2S)2CH2SH (1.00 g, 6.0 mmol) was dissolved in THF (5 cm3), added to the solution of NaH and stirred at room temperature. 4-Bromo-nitrobenzene (1.49 g, 7.4 mmol) was dissolved in THF (5 cm3), added to the mixture and stirred at 60 °C overnight. Excess sodium hydride was quenched with ammonium chloride-saturated water and the product was extracted with dichloromethane. The organic phase was dried (MgSO4) and purified by flash chromatography (hexane/chloroform 4:1) to give a yellow solid (0.86 g, 3.0 mmol, 50%). [Fe3(CO)12] (0.97 g, 1.9 mmol) was dissolved in toluene (50 cm3). CH3C(CH2S)2CH2SC6H4-p-NO2 (0.46 g, 1.6 mmol) was added to the solution. The dark green mixture turned red-brown when it was heated at 90 °C for 90 min. The solvent was removed, and the compound was purified by flash chromatography under dinitrogen (diethyl ether/hexane 4:1) to give a red-brown powder (0.28 g, 0.52 mmol, 32%). IR: νmax/cm−1 (CO) 1933, 1988 and 2052 (acetonitrile). CCDC 1898467 contains the supplementary crystallographic data for I-bzNH2. These data can be obtained free of charge via http://www.ccdc.cam.ac.uk/conts/retrieving.html (or from the CCDC, 12 Union Road, Cambridge CB2 1EZ, UK; Fax: +44 1223 336033; E-mail: deposit@ccdc.cam.ac.uk).
Crystals were orange-brown blocks. A single crystal of size 0.40 × 0.29 × 0.24 mm (corresponding to the c, a and b axes) was selected, mounted on a glass fiber and coated in silicone grease for photographic examination and the data collection. Diffraction data were measured on an Enraf-Nonius CAD4 diffractometer with monochromated Mo Kα radiation. Cell parameters were refined using least-squares methods from the settings of 25 reflections in the range 10 to 11 degrees in theta. Each reflection was centred in 4 orientations. 3517 unique reflections were collected to a theta of 25°, with 2632 having intensities greater than 2σ(I). 3 reflections were chosen to monitor any decay in intensities (measured every 10,000 s) and any change in orientation (measured every 300 reflections). There was slight decay of the crystal with an overall decrease in intensities of 1.11%. Diffractometer data showed a monoclinic crystal in the space group P21/c (equivalent to no. 14) with cell dimensions a = 8.797(3) Å, b = 12.528(4) Å, c = 18.181(4) Å, β = 94.30(2)° and V = 1998.0(10) Å3. The data were corrected for Lorentz-polarisation effects, for decay of the intensities, for absorption by semi-empirical psi-scan methods, and for negative intensities by Bayesian statistical methods. The structure was solved using direct methods in the SHELXS-97 program and refined on F2 using full-matrix least-squares procedures in SHELXL-97. The refinement process showed one [Fe2(CO)5{CH3C(CH2S)CH2SC6H4-p-NH2}] molecule in the unit cell. All non-hydrogen atoms were refined with anisotropic thermal parameters. Hydrogen coordinates were included in idealised positions and subsequently refined freely. All hydrogen isotropic thermal parameters were refined freely. Convergence was reached with R1 = 0.043 and wR2 = 0.066 for all data, weighted w = 1/[σ2(Fo2) + (0.029P)2] where P = (Fo2 + 2Fc2)/3; for the 2632 observed data R1 = 0.027 and wR2 = 0.059. In the final difference map the residual electron density was 0.30 e−/Å3 for the largest peak, which was close to an S atom.

Conclusions

A combination of electrochemical and spectroelectrochemical measurements has shown that reduction of the nitrobenzene fragment of I-bzNO2 occurs at potentials more positive than that of its diiron core. Based on shifts of the ν(CO)-band profile, the impact of nitrobenzene reduction on the diiron core is approximately 8% of that for a metal-centred reduction. At mild potentials electrochemically-generated [I-bzNO2]− reacts rapidly with HLut+ to give I-bzNH2 and [I-bzNH3]+.

Competitive binding experiments between CO and the pendant thioether of the oxidised and reduced forms of I-bzNO2 and the neutral and protonated forms of I-bzNH2 show that thioether displacement is favoured when more electron-withdrawing groups are constituents of the thioether. Nitrobenzene reduction or conversion to aniline shifts the equilibrium between the open, thioether-dissociated, and closed forms. These results demonstrate that redox reactions of a remote site can be used to drive ligand dissociation and binding at the diiron centre.
Considering the many reports of electrocatalytic proton reduction by diiron compounds in the presence of weak acids, it is perhaps surprising that reduction of I-bzNO2 in the presence of excess HLut+ at potentials sufficient to reduce the diiron core does not result in catalytic proton reduction. Generally, the purported electrocatalytic behaviour is based only on the current response of the system, where the form and phase of the catalytic species is not defined. This study highlights the value of spectroelectrochemical techniques when elucidating the chemistry of reactive, and reacting, systems.

Supplementary Materials: Table S1. EasySpin EPR simulation parameters for [I-bzNO2]−. Table S2. Determination of the free energy of activation (ΔG‡) parameter in CDCl3; Tc is the coalescence temperature; Δνab is determined from the slow limit spectra. Cif and checkcif files.

The hyperfine coupling constants aN, a2,6, a3,5 and aR have values corresponding to 26.7, 9.46, 3.24 and 0.64 MHz. The transient character of the reduced product was confirmed by the observed loss of the EPR signal within ca. 1 min of switching the potentiostat to open circuit.

Figure 3. X-band EPR spectra of in-situ reduced I-bzNO2 and simulated spectrum. The simulated spectrum was calculated using the EasySpin suite of routines [24]. Details of the simulation parameters are given in Table S1.

Figure 4. IR-SEC measurements of the reduction (a,b) and re-oxidation (c) of I-bzNH2. The applied potential and current response of the cell during the IR-SEC experiment is shown in (d). For this, and subsequent, figures the initial spectrum of each block is shown in green and the final spectrum in blue.

Figure 5. IR-SEC recorded during (a) reduction and (b) re-oxidation of I-bzNO2 in CH2Cl2 (2 mM, 0.2 M TBAPF6). The applied potential and current response of the cell during the IR-SEC experiment is shown in (c).

Figure 6. IR-SEC measurements of the first (a) and second (b) reduction processes of I-bzNO2 followed by reoxidation at mild (c) and more forcing (d) conditions. The applied potential and current response of the cell during the IR-SEC experiment is shown in (e).

Figure 7. IR-SEC of I-bzNH2 (2 mM, CH3CN, 0.2 M TBAPF6) with CO (ca. 0.3 MPa). The spectra show the reduction (a) and re-oxidation at increasingly positive potentials (b-d). The applied potential and current response of the cell during the IR-SEC experiment is shown in (e).

Figure 8. IR-SEC spectra recorded during (a) and (b) the reduction of I-bzNO2 (2 mM, 0.2 M TBAPF6) in CH2Cl2 saturated with CO (0.3 MPa) and (c) re-oxidation. The time between spectra was 2.68 s. The applied potential and current response of the cell during the IR-SEC experiment is shown in (d).

Figure 9. IR-SEC spectra recorded during the reduction of I-bzNO2 in the presence of HLut+ at (a) −0.9 V and (b) −1.3 V. Weak ν(CO) depletion bands (denoted by an asterisk) are due to a small concentration of the hexacarbonyl species. Re-oxidation at 0 V after the initial reduction at −0.9 V does not lead to any significant spectral changes (Figure S5).

Scheme 2. Summary of the chemical and redox interconversions of I-bzNO2 and I-bzNH2. The metal- or ligand-based character of the redox reactions is emphasised using red text. The spectra of the different species have been obtained by subtraction, and the wavenumbers for the bands in the subtraction spectra are given. In some cases, the absence of a good reference spectrum leads to distortions of the baseline and the introduction of features due to water vapour. Note that the initial spectra obtained following reduction of I-bzNH2 under CO have a band profile and wavenumber consistent with the generation of [I-(bzNH2)CO]− in chemistry analogous to that of Fe2(µ-S(CH2)3S)(CO)6 [29].
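The IR-SEC analysis above rests on difference spectra of exactly this kind, generated by subtraction of spectra recorded before and during electrolysis (depletion bands negative, growth bands positive). The fragment below is a minimal, generic sketch of that subtraction starting from single-beam intensity spectra; it is not the in-house or Grams/32 routines referred to in the Spectroelectrochemistry section, and the band positions and intensities are invented for the example.

import numpy as np

def difference_absorbance(single_beam_before, single_beam_after):
    """dA = A(after) - A(before) = -log10(I_after / I_before) on a shared wavenumber grid."""
    before = np.asarray(single_beam_before, dtype=float)
    after = np.asarray(single_beam_after, dtype=float)
    return -np.log10(after / before)

# Toy example: a band near 1990 cm-1 is partially depleted while a new band grows near 1940 cm-1.
wn = np.linspace(2100.0, 1600.0, 501)

def gaussian_band(centre, depth, width=8.0):
    return depth * np.exp(-0.5 * ((wn - centre) / width) ** 2)

I_before = 1.0 - gaussian_band(1990.0, 0.30)
I_after = 1.0 - gaussian_band(1990.0, 0.18) - gaussian_band(1940.0, 0.10)

dA = difference_absorbance(I_before, I_after)
print(f"growth maximum {dA.max():+.3f} at {wn[dA.argmax()]:.0f} cm-1, "
      f"depletion minimum {dA.min():+.3f} at {wn[dA.argmin()]:.0f} cm-1")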
2019-04-06T18:42:06.485Z
2019-03-01T00:00:00.000
{ "year": 2019, "sha1": "41404ddfe5d52ab0cc9a4c7fc38fbec06dce1be5", "oa_license": "CCBY", "oa_url": "https://www.mdpi.com/2304-6740/7/3/37/pdf?version=1552034844", "oa_status": "GOLD", "pdf_src": "ScienceParseMerged", "pdf_hash": "41404ddfe5d52ab0cc9a4c7fc38fbec06dce1be5", "s2fieldsofstudy": [ "Chemistry" ], "extfieldsofstudy": [ "Chemistry" ] }
256056268
pes2o/s2orc
v3-fos-license
Determination of Free Histidine in Complex Hair Care Products with Minimum Sample Preparation Using Cation-Exchange Chromatography and Post Column Derivatization

In this communication, we describe the first analytical method for the determination of free histidine in hair care products (shampoos and conditioners). Cation-exchange chromatography combined with postcolumn derivatization and fluorimetric detection enabled the accurate (recovery: 83.5–114.8%) and precise (2.4–5.6% RSD) determination of free histidine without matrix interferences at concentration levels down to 1.5 mg kg−1. Real commercially available samples were found to contain the amino acid at levels ranging between 70 and 535 mg kg−1.

Introduction

Recent published research has proven experimentally that the presence of copper traces in human hair promotes damage (color change, split ends, loss of shine, etc.) [1,2] catalyzed by UV exposure [3]. Copper ions mainly from exogenous tap water sources were determined in the sulfur-poor endo-cuticle [4], while possible action mechanisms were identified ranging from the formation of reactive oxygen species to participation in the oxidation of lipids [5]. A potential solution to the latter issues is the addition of copper chelators to daily hair care products (shampoos and conditioners) that proved to improve hair health by reducing the effect of UV exposure [6]. Well-known metal chelators such as N,N'-ethylenediamine disuccinic acid were found to be effective for copper removal in shampoos [3] but they were not applicable for conditioners, causing instability of the products due to their negatively charged functional groups. Although not an obvious alternative, the amino acid histidine [6] offers (i) increased chelating activity due to the imidazole group compared with other amino acids [7], (ii) effective chelation of copper in the presence of a significant excess of calcium and (iii) effective chelation at the typical pH of shampoos and conditioners [8]. From an analytical chemistry point of view, hair care products are considered to be complicated matrices containing numerous organic and inorganic compounds, thus requiring extra sample cleanup steps prior to the end-point analysis. Representative recent analytical applications in hair care products are presented in Table 1 [9][10][11][12][13][14][15][16][17][18]. As can be seen in the examples of Table 1, solid- or liquid-phase extraction steps are quite common even when sophisticated LC-MS instrumentation is employed, while dry ashing/wet digestion are necessary for the determination of inorganic analytes. In the present study, we report the straightforward determination of free histidine in hair care products (shampoos and conditioners) using cation-exchange chromatography coupled to specific postcolumn derivatization and fluorimetric detection. To the best of our knowledge, this is the first analytical report for histidine in this type of matrix. The method is based on the cation-exchange separation of the analyte from the complicated substrates followed by an online postcolumn reaction with o-phthalaldehyde in the absence of nucleophilic compounds [19,20]. Validation experiments confirmed the absence of matrix effects and allowed the adaptation of very simple and convenient sample processing (dissolution-filtration-dilution).
Investigation of the Separation Conditions

Despite the negative charge of the carboxylic acid moiety, histidine can be retained on cation-exchange columns through the positively charged nitrogen atoms in its molecule. Although previous studies of our group have shown rather fast elution of the analyte, the combination with selective postseparation detection has proven to be adequate for the analysis of complex matrices [20]. It was therefore necessary to examine the chromatographic profiles of the samples prior to selection of the final separation conditions. Such preliminary investigations are necessary in order to avoid "surprises" during real sample analyses. Three histidine-containing shampoos and three histidine-containing conditioners were randomly selected and were analyzed independently using 5 mmol L−1 HNO3 as the mobile phase. Only a single peak corresponding to histidine was recorded in all cases, despite the complex background of the samples. Similar results were obtained by our group for biological material as well and can be attributed to the specific postcolumn derivatization reaction [20]. Increase in the acidity of the mobile phase led to elution of the analyte at the void volume of the HPLC and also stressed the analytical column in terms of pH tolerance. On the other hand, lower acidity (3 mmol L−1 HNO3) increased the separation cycle without any other obvious gain. Nor was any obvious gain recorded by using 5-10% acetonitrile in the mobile phase. The PCD conditions (concentration of the reagent, pH, buffer, temperature, etc.) were adopted from previous experimental work from our group without changes as they provided adequate selectivity [20]. A graphical depiction of the HPLC-PCD setup (including the derivatization reaction) can be seen in Figure 1.

Investigation of the Extraction Conditions

Histidine is a polar compound, and its amino groups can be protonated in acidic media. For this reason, 5 mmol L−1 HNO3 (mobile phase) was used for the extraction of the analyte in all cases, in this way matching the composition of the mobile phase and avoiding potential peak distortion phenomena. The extraction was promoted ultrasonically, and the extraction recovery was studied at time intervals of 15 and 30 min. The time of 30 min did not improve the extraction recovery, and it was concluded that 15 min was adequate to recover free histidine from the matrix. Following extraction, the samples were treated according to the Sample Preparation section (centrifugation, filtration and dilution). No foaming of the samples was observed at any stage. A graphical depiction of the sample treatment workflow is presented in Figure 2.
Study of the Matrix Effect

Due to the complexity of the samples, it was important to investigate the potential postextraction matrix effects [21,22]. The postextraction matrix effect was studied individually for the shampoo and conditioner products, using samples that did not contain endogenous histidine (according to the manufacturers' labels). In brief, pooled shampoo and conditioner samples (n = 6 for each pooled matrix) were prepared individually and were treated as described in the experimental section. Each pooled matrix was spiked after extraction with histidine in the range of 5-20 mg kg−1. The matrix effect was calculated as the relative error of the slope of the individual matrix-matched calibration curves compared to the slope of the aqueous one. As shown in Table 2, due to the combination of cation-exchange chromatography and selective PCD, the percent postextraction matrix effects were negligible, being −1.1% and +3% for histidine in the shampoo and conditioner samples, respectively.

Analysis of Real Samples

The hair care samples were commercially available shampoo and conditioner products and were selected based on their labels' claims. In the products that did not refer to histidine in their ingredients, the analyte was not detected. On the other hand, free histidine was quantified in all claimed samples, at concentration levels ranging between ca. 70 and 535 mg kg−1, as shown in Table 3. The dilution factor of each sample is shown in Table 3. Representative chromatograms from a standard solution and both shampoo and conditioner products are depicted in Figure 3. The accuracy of the method was evaluated by spiking experiments in the samples at two final concentration levels of 5 and 10 mg kg−1. As can be seen in Table 4, the recoveries were satisfactory and ranged between 83.5 and 114.8%.

Table 3. Histidine content in shampoo and conditioner products (n = 3).
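A minimal sketch of the matrix-effect calculation described above (slope of a matrix-matched calibration line compared with the slope of the aqueous line) is given below; the spiking levels follow the 5-20 mg kg−1 range of the text, while the instrument responses are made-up numbers used only to show the arithmetic.

import numpy as np

def slope(x, y):
    # Slope of a least-squares straight-line fit.
    return np.polyfit(x, y, 1)[0]

spike_levels = np.array([5.0, 10.0, 15.0, 20.0])             # mg kg-1, as in the text
aqueous_response = np.array([51.0, 101.0, 152.0, 201.0])     # hypothetical peak areas
matrix_response = np.array([50.5, 100.1, 150.0, 198.5])      # hypothetical peak areas

matrix_effect_pct = 100.0 * (slope(spike_levels, matrix_response)
                             - slope(spike_levels, aqueous_response)) \
                    / slope(spike_levels, aqueous_response)
print(f"post-extraction matrix effect: {matrix_effect_pct:+.1f} %")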
Reagents and Solutions

All reagents used in this study were commercially available and of analytical grade. The following reagents were used: histidine (Sigma-Aldrich), o-phthalaldehyde (OPA, Fluka), HNO3 (Fluka), KH2PO4 (Merck) and NaOH (Merck). Doubly deionized water was produced by a Milli-Q system (Millipore, Thessaloniki, Greece). The mobile phase consisted of 5 mmol L−1 HNO3 and was prepared daily, including ultrasonic degassing. The standard stock solution of histidine was prepared daily at the concentration level of 1000 µmol L−1 by dissolving accurately weighed amounts in the mobile phase. Working standard solutions were prepared by serial dilutions of the stock solution in the same solvent. The derivatizing reagent (OPA) was prepared at a concentration of 10 mmol L−1. Phosphate buffer was also prepared daily at 50 mmol L−1 and was adjusted to the pH value of 9 by addition of 2.0 mol L−1 NaOH solution.

HPLC-PCD Procedure

Standards or samples of 20 µL were separated through a cation-exchange column by isocratic elution (5 mmol L−1 HNO3) at a flow rate of 1.0 mL min−1 and a column temperature of 60 °C. The eluted compounds were mixed online with the PCD reagents at a flow rate of 0.25 mL min−1 for each stream. The derivatization reaction was allowed to proceed through a thermostated reaction coil (200 cm/60 °C), and the products were detected using the fluorescence detector at λex/λem = 360/440 nm.

Preparation of Samples

Two types of commercially available hair care products, namely, shampoos and conditioners, were purchased from local markets and were stored as for everyday use. A representative amount of 1.0 g of each product was weighed in a 15 mL plastic centrifuge tube and dispersed in 10 mL of HNO3 5 mmol L−1. The extraction of histidine was promoted ultrasonically for 15 min. The mixture was centrifuged at 2000 rpm for 5 min, and the supernatant was filtered through a syringe filter. Each sample was analyzed at a preliminary step in order to determine the necessary dilution factor. Depending on the concentration of free histidine in the filtered solutions, the samples were appropriately diluted in the mobile phase prior to HPLC-PCD analysis. The validation of the method was based on histidine-free shampoo and conditioner matrices following the above pretreatment. Each sample was processed in triplicate.

Conclusions

In this study, the first analytical method was validated for the determination of free histidine in hair care products. Cation-exchange chromatography coupled to online postcolumn derivatization offers the determination of histidine in shampoo and conditioner samples, without matrix effects and with minimum sample preparation, at levels ranging between 70 and 535 mg kg−1.
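As a worked illustration of the sample-preparation arithmetic described above (1.0 g of product extracted into 10 mL of 5 mmol L−1 HNO3, then diluted before injection), the conversion from a measured histidine concentration back to the content of the product can be written as a short calculation; the measured concentration and dilution factor below are hypothetical, and the molar mass of histidine (155.15 g mol−1) is taken as a known constant.

MOLAR_MASS_HISTIDINE = 155.15   # g/mol

def histidine_content_mg_per_kg(measured_umol_per_L, dilution_factor,
                                extract_volume_L=0.010, sample_mass_g=1.0):
    # umol/L (diluted) * DF -> umol/L in the extract; * L -> umol; * g/mol -> ug; / g -> mg/kg
    micrograms = measured_umol_per_L * dilution_factor * extract_volume_L * MOLAR_MASS_HISTIDINE
    return micrograms / sample_mass_g

# Example: 32 umol/L measured after a 10-fold dilution corresponds to about 497 mg/kg of product.
print(f"{histidine_content_mg_per_kg(32.0, 10):.0f} mg/kg")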
2023-01-22T06:16:08.997Z
2023-01-01T00:00:00.000
{ "year": 2023, "sha1": "e9414962e5a4ae7973398c833b2af51cae262842", "oa_license": "CCBY", "oa_url": "https://www.mdpi.com/1420-3049/28/2/888/pdf?version=1673857858", "oa_status": "GOLD", "pdf_src": "PubMedCentral", "pdf_hash": "ee9280b562dc2cc204f6834cde4e28d921529609", "s2fieldsofstudy": [ "Biology" ], "extfieldsofstudy": [ "Medicine" ] }
129841902
pes2o/s2orc
v3-fos-license
INFLUENCE OF HYDRIC BALANCE ON ENVIRONMENTAL PHYTOINDICATION OF RUDERAL FRAGMENTS

The aim of this present work is to verify the action of the hydric balance on phytosociological and physio-ecological factors in plants from a ruderal fragment. The leaf number (NF), dry weight of leaves (PSF), foliar area (AF) and total dry matter weight (PMSt) were determined. With the values of AF and PMSt, the growth analysis parameters were calculated. These phytoecological factors are: leaf area index (IAF), leaf area ratio (RAF), absolute growth rate (TCA), dry matter production rate (TPMS), relative growth rate (TCR) and net assimilation rate (TAL). The phytosociological factors were also calculated: relative density (DR), absolute frequency (FA), relative frequency (FR), relative dominance (DoR), importance value index (IVI) and relative importance (IR). In accordance with the hydric balance analysis for the region (Vassouras, Rio de Janeiro, 43°07' W/22°08' S), it was observed that the analyzed locality has a period of drought that extends from May to September, when the temperature rises. The period of hydric excess is observed from October to January and in April, with the maximum in December, during the summer. The total annual average precipitation and the average temperature for the locality are, respectively, 1264.7 mm and 25.6 °C. There is an annual hydric deficit of 30 mm and an excess of 372 mm. A cause that modifies the use of the LBC (leaflet-petiole angle class) under natural conditions is the set of environmental factors, such as temperature, hydric deficit and the nutritional state of the plant. Low wf modified some morphologic parameters in ruderal plants, including the size and the total number of leaves at flowering. The reference value (r) was determined as 30 mm, based on observations of leaf tissue formation and on growth curves. It was verified that the duration of the plastochron (p/r) was 0.61 per day and the rate of leaf appearance on the shoot (r/p) was 0.83 per day under hydric excess; in the plants under hydric deficit, p/r = 0.89 per day and r/p = 1.12 per day.

INTRODUCTION

Plants show diverse types of environmental adaptation (modulation and modification) and genetic adaptation (evolution), for example to the amount and the quality of the predominant radiation. The modulation adaptations occur quickly and they are temporary. When the original situation returns, the initial behavior is soon reacquired. Examples of photo-modulation are leaf movements that increase the exposure of the leaf surface to the incident radiation, and other phototropic, photonastic and phototactic movements (FINCH-SAVAGE & ELSTON, 1982).
The modification responses adapt the plants to the average conditions of radiation during the growth period. With this, the structural characteristics of the plants are maintained. Plants adapted to the shade develop, in the chloroplasts, high concentrations of chlorophyll and accessory pigments. Plants exposed to stronger radiation develop an efficient axial system of water conduction. Their leaves have several cell layers in the mesophyll and cells with abundant chloroplasts. As a consequence of the structural adaptations and the active metabolic processes, the adapted plants exposed to intense light produce a greater amount of dry matter. It contains a higher content of energy, and their fertility is greater (frequency of budding, formation of seeds, yield of fruits). In contrast, plants adapted to tenuous light are distinguished by a reduced production of dry matter, efficient protein synthesis, and low respiration and water turnover. These characteristics enable them to grow in places with modest amounts of energy.

The evolutionary adaptations to the available radiation are based on genotypic changes. They determine the differences, sometimes very notable ones, which appear in the ecology of distribution of various species and ecotypes of plants. The modulation, modification and evolution adaptations are not mutually exclusive, but overlap. Exact adjustments guarantee the greatest possible use of the available radiation. If the space occupied by a group of plants is densely filled, such as in the tropical forests, this is due to the fact that the light is completely used.

The cellular elongation of vegetal organs in growth is regulated by the water availability in the ground. When environmental factors limit foliar expansion, the processes directly related to light interception, temperature regulation, evapotranspiration, hydric balance and diffusion of CO2 are affected (FITTER & HAY, 1981). The hydric balance is a method to calculate the water availability in the ground for vegetal communities. It balances the precipitation against the potential evapotranspiration, taking into consideration the capacity of water storage in the ground. This water availability is an ecological factor more strongly correlated with the geographic distribution of the vegetal species than the precipitation itself. The ground, the natural water reservoir for the vegetation, has a storage capacity that, once saturated, allows the percolation of the exceeding water to the water table. The water entrance is represented by the precipitation, while the main exit is the evapotranspiration. The hydric balance counts the precipitation against the potential evapotranspiration, considering a defined value for the capacity of water storage in the ground. This is the maximum amount of water usable by plants that can be stored in the root zone. The root-zone storage capacity is a characteristic of the plant, independent of the ground type.

Evapotranspiration is the combined process of transpiration and evaporation that vegetation provides. The plant transpiration occurs through the stomata and the cuticle of plants, using water that the root system has absorbed throughout the explored soil profile.
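Before turning to evaporation, a minimal sketch of the monthly accounting implied by the hydric-balance description above is given below: precipitation is balanced against potential evapotranspiration with a fixed soil-water storage capacity, and the unmet demand and the overflow accumulate as the annual hydric deficit and excess. The monthly values and the 100 mm storage capacity are placeholders, not the Vassouras data, and the linear storage drawdown is a simplification of the usual Thornthwaite-Mather bookkeeping.

def annual_water_balance(precip_mm, pet_mm, storage_capacity_mm=100.0):
    """Return (annual hydric deficit, annual hydric excess) in mm."""
    storage = storage_capacity_mm
    deficit = 0.0
    surplus = 0.0
    for p, pet in zip(precip_mm, pet_mm):
        if p >= pet:                                # wet month: refill the soil store first
            storage += p - pet
            if storage > storage_capacity_mm:
                surplus += storage - storage_capacity_mm
                storage = storage_capacity_mm
        else:                                       # dry month: draw on the soil store
            demand = pet - p
            withdrawn = min(storage, demand)
            storage -= withdrawn
            deficit += demand - withdrawn           # unmet demand = hydric deficit
    return deficit, surplus

# Placeholder monthly precipitation and potential evapotranspiration (mm), Jan-Dec.
precip = [220, 180, 160, 90, 40, 25, 20, 25, 60, 110, 150, 185]
pet = [120, 110, 105, 85, 60, 45, 45, 60, 80, 100, 110, 120]
print(annual_water_balance(precip, pet))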
Evaporation corresponds to the loss of water deposited on the surface of the plant and of water in the soil. Evaporation is a physical process, while transpiration is a biological one; in the latter case, physiological processes that control water loss by the plant come into play, operating concurrently with the weather. The evapotranspiration rate is directly proportional to the energy balance of the evaporating surface and to the removal of water molecules from that surface. The factors affecting evapotranspiration are the same as for a free water surface; in addition, transpiration is affected by the opening of the stomata. The larger the stomatal opening, the greater the water loss by transpiration. Stomatal opening is a complex mechanism that varies with plant species, but it generally follows soil water availability, solar radiation intensity, plant metabolism and CO2 concentration. The stomata remain open as long as soil water is available; as water becomes restricted, the stomata reduce their opening and may close completely. If the water restriction increases gradually, the plant mobilizes mechanisms to reduce transpiration. The aim of the present work is to verify the action of the hydric balance on phytosociological and phytoecological factors in plants from a ruderal fragment. MATERIALS AND METHODS The study area is about 6200 m2 and is located in Vassouras, Rio de Janeiro. The municipality lies on the mountain plateaus of the middle Paraíba River basin, at an altitude of around 600 m, with longitude and latitude of 43° 07' W and 22° 08' S, respectively. The vegetation composing the ruderal fragment includes the Malvaceae Sidastrum micranthum (St. Hil.) Fryxell, Sida rhombifolia L., Sida carpinifolia L. and Sida cordifolia L. An automatic weather station was set up, whose sensor-based data acquisition system stores the information collected at specific intervals on magnetic tapes, which are then processed by a micro-computing system. The result is a final record of average temperature and humidity, solar radiation, net radiation, precipitation and wind speed. These data are used to determine the values of potential evaporation. We consider the energy balance for a column that extends from the ground to a reference height above the vegetation, where the observations are made. This process can be described by the expression Rn = LeE + H + G + ∂A/∂t, where Rn is the net radiation (Watt/m2), Le is the latent heat of vaporization (Joule/kg), E is the flow of water vapour (kg/m2 s), H is the sensible heat flow (Watt/m2), G is the soil heat flow (Watt/m2) and A is the heat storage in the column (Joule/m2). The heat storage in the column comprises terms representing the latent and sensible heat stored in the air inside the column and the heat stored in the vegetation mass; the rate of change of this storage, according to Thom (1977), is obtained by integrating those terms over the column height, from the ground to the reference height (equation 1). The available energy at the surface is used to maintain the flows of sensible and latent heat. When the surface is saturated, the surface resistance rc becomes null and the flow of water vapour leaves the surface at its maximum value for the existing conditions, known as the potential evaporation.
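As a small illustration of the energy balance just described, the sketch below treats the latent heat flux LeE as the residual of the balance and converts it to an equivalent evaporation rate. The numerical values (Rn, H, G, ∂A/∂t and the latent heat constant) are illustrative assumptions, not measurements from this study.

```python
# Minimal sketch: latent heat flux as the residual of the energy balance
# Rn = LeE + H + G + dA/dt, and its conversion to an evaporation rate.
# All values are illustrative, not data from the study.

LE_VAPORIZATION = 2.45e6   # latent heat of vaporization Le (J/kg), near 20 degC

def latent_heat_flux(rn, h, g, dA_dt):
    """LeE = Rn - H - G - dA/dt, all terms in W/m2."""
    return rn - h - g - dA_dt

def evaporation_rate_mm_per_day(le_e):
    """Convert a latent heat flux (W/m2) to an equivalent water depth (mm/day)."""
    kg_per_m2_per_s = le_e / LE_VAPORIZATION     # E = LeE / Le
    return kg_per_m2_per_s * 86400.0             # 1 kg/m2 of water equals 1 mm depth

if __name__ == "__main__":
    le_e = latent_heat_flux(rn=450.0, h=120.0, g=15.0, dA_dt=10.0)
    print(f"LeE = {le_e:.1f} W/m2 -> {evaporation_rate_mm_per_day(le_e):.2f} mm/day")
```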
The aerodynamic resistance to the flow of water vapour (latent heat flow) is described by equation (3) below. The evaporation rate LeE, the sensible heat flow H and the friction velocity determine a measure of the weather conditions through the Monin-Obukhov stability length (L), given by L = u*3 p Cp T / [k g (H + 0.07 LeE)]. The LeE equation can be used to calculate the potential evaporation by assuming rc = 0; however, the aerodynamic resistance that would occur when evaporation reached its maximum rate is not known, because it depends on the weather conditions, and these conditions change whenever any flow changes. If, for simplicity, the equality ra = ran is adopted (where ran is the aerodynamic resistance under neutral conditions), values are in fact fixed for the resistance and the flows independently of the weather conditions that may occur, so that consistency with the resulting stability length is no longer maintained and the functional relationships between the variables are lost. Following a different route, and in keeping with the method of determining the potential evaporation, the aerodynamic resistance, as well as the flows, is determined iteratively. It is a procedure in which each iteration provides a better agreement among the stability conditions created by taking rc = 0 and the variables that depend on those conditions, culminating in values that are mutually compatible among the flows, the resistance and the stability length. The sampled vegetation consisted of specimens bearing flowers and fruits. The collected material was pressed and dried by conventional procedures and, after identification, was deposited in the Herbarium of the Universidade Federal Rural do Rio de Janeiro (RBR). In the laboratory the leaves were separated and counted. In each plot, 50 leaves were randomly sampled for the estimation of leaf area (AF). The leaf area was established by planimetry (A. OTT planimeter, Kempten, Bayern, model 311). We also weighed the total dry matter of the leaves collected in each plot. Thus, we determined the number of leaves (NF), leaf dry weight (PSF), leaf area (AF) and total dry matter weight (PMST). From the values obtained for leaf area and dry weight, we calculated the growth analysis parameters. These factors were called phytoecological and are: leaf area index (IAF), leaf area ratio (RAF), absolute growth rate (TCA), dry matter production rate (TPMS), relative growth rate (TCR) and net assimilation rate (TAL). In addition to these parameters, phytosociological factors were also calculated: relative density (DR), absolute frequency (FA), relative frequency (FR), relative dominance (DoR), importance value index (IVI) and relative importance (IR); the corresponding equations are presented further below. RESULTS AND DISCUSSION The study area has a medium-textured, sandy-clay, acid soil. The pasture fields correspond to an area originally covered by dense forests on the highlands and by swampy lowlands. The region is crossed by the Santana river, which originally had a shallow bed, so that the area flooded when it rained. This began to be modified by successive deforestation, river dredging and the development of pastures. The climate of Vassouras, according to the Köppen classification, is Aw (tropical wet). By the Holdridge system, the locality lies in the tropical rainforest zone, a tropical forest with deciduous and mesophytic elements and a high proportion of evergreen species.
According to the analysis of the water balance in the region (Tables 5 and 6), the locality has a dry period corresponding to the months from May to September, when temperature runs high relative to precipitation. The water surplus period is observed from January to April and then from October to December, the summer period. The average total annual rainfall and the average temperature for the locality are, respectively, 1264.7 mm and 25.6 °C. There is an annual water deficit of 30 mm and a surplus of 372 mm. Phytoindication plant samples were obtained from the ruderal phytofragment in the study area, and the analysis of the phytosociological and phytoecological factors was carried out for two periods defined by the water balance (water deficit and water surplus), in order to verify its influence on the development of these plants (Tables 1, 2, 3 and 4). Dry matter accumulation increased with leaf age. The leaf area depends mainly on the level of soil moisture at the time of leaf expansion; in all cases the increase in area was larger and more significant in the period without water deficit. Leaf expansion showed exponential growth, markedly influenced by water stress, which limited cell elongation. In the first days of growth, the leaves of these ruderals depend directly on a supply of water, nutrients and photoassimilates; rapid elongation over a short period of time requires a large supply of these components. This seems to explain the rapid senescence of the leaves of the previous flush, which act as a source of assimilates for the new ones. Lamoreaux et al. (1978) suggested that the plastochron index (IP) can be used to test the effects of environmental stresses on plant growth. Low leaf water potential (Ψw) changed morphological parameters in the ruderal plants, including the size and the total number of leaves at sprouting. The reference value (R) was determined as 30 mm, based on observations of the formation of the internal tissues of the leaf and of its growth curve. The duration of the plastochron (p/r) was 0.61 per day and the rate of leaf appearance on the shoot (r/p) was 0.83 day-1 under water surplus; in plants under water deficit, p/r = 0.12 per day and r/p = 0.89 day-1 were found. Table 5 presents the monthly average temperature (°C), specific humidity of the air (g/kg), saturation deficit (g/kg), wind speed (m/s), solar radiation (W/m2), net radiation (W/m2) and total precipitation (mm). The total available energy is shown in Table 6, in mm of water equivalent. These values incorporate the variations of energy storage within the ruderal fragment, as expressed in the terms of the equation for ∂A/∂t. In determining these terms it was assumed that the temporal variations of temperature and water vapour pressure observed above the phytofragment represent the pattern of variation along the column that extends from the ground to the reference height Zr (THOM & OLIVER, 1977).
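To illustrate the kind of bookkeeping behind Tables 5 and 6, the sketch below runs a simple sequential monthly water balance (Thornthwaite-Mather style): given monthly precipitation and potential evapotranspiration and a soil storage capacity, it accumulates the annual deficit and surplus. The storage capacity and the monthly series are invented for illustration; they are not the station data used in the study.

```python
# Minimal sketch of a monthly soil water balance: given monthly precipitation P
# and potential evapotranspiration PET, track the soil water store and
# accumulate the annual deficit and surplus. Values are illustrative only.

def water_balance(p, pet, capacity=100.0):
    storage = capacity            # start the year with a full store (assumption)
    deficit = surplus = 0.0
    for p_m, pet_m in zip(p, pet):
        balance = p_m - pet_m
        if balance >= 0:                       # wet month: refill, spill the rest
            refill = min(balance, capacity - storage)
            storage += refill
            surplus += balance - refill
        else:                                  # dry month: draw down the store
            draw = min(-balance, storage)
            storage -= draw
            deficit += (-balance) - draw       # unmet demand counts as deficit
    return deficit, surplus

if __name__ == "__main__":
    p   = [220, 180, 160, 90, 40, 25, 20, 25, 60, 120, 150, 175]   # mm, Jan..Dec
    pet = [130, 120, 115, 90, 70, 55, 55, 65, 80, 100, 110, 125]   # mm, Jan..Dec
    d, s = water_balance(p, pet)
    print(f"annual deficit = {d:.0f} mm, annual surplus = {s:.0f} mm")
```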
This consideration allows an immediate solution of the first two integrals of the equation for ∂A/∂t; as for the third integral of that equation, which represents the variation of the energy stored in the vegetation mass, it was assumed that this term followed the pattern of variation of the energy stored in the air in the form of sensible heat. With respect to the heat flow into the ground, given the low levels of radiation reaching the soil (about 3% of the radiation reaching the top of the phytofragment), its contribution was neglected. The fall in net radiation on rainy days and on the selected days and, as a general rule, throughout the observation period was counterbalanced by the sensible heat flow and by the negative rates of change of energy storage within the phytofragment. The variation of the energy stored inside the ruderal fragment during rainfall events is, together with the sensible heat flow of this phytofragment, one of the mechanisms that compensate for the reduced contribution of net radiation in the composition of the total energy available for evaporation (Table 6). In the study area it was found that the ruderal plants adapt better to sites where water surplus occurs; growth and development increased in the summer. The equation that describes the evaporation of a non-saturated surface, according to the Penman-Monteith formulation (THOM & OLIVER, 1977), is LeE = [∆ R + p Cp (es − e)/ra] / [∆ + γ (1 + rc/ra)] (equation 2). The symbols of these equations represent: p = air density (kg/m3), pv = vegetation density (kg/m3), Cp = specific heat of air at constant pressure (Joule/(kg K)), cvg = specific heat of the vegetation (Joule/(kg K)), T = air temperature (K), Tv = vegetation temperature (K), es = saturation pressure of water vapour in the air (Newton/m2), e = water vapour pressure in the air (Newton/m2), ∆ = des/dT (Newton/(m2 K)), R = energy available at the surface (Watt/m2), γ = psychrometric constant (Newton/(m2 K)), ra = aerodynamic resistance to the flow of water vapour (s/m) and rc = surface resistance to the flow of vapour (s/m). The aerodynamic resistance is ra = {Ln((Zr − d)/Zo) − Ψm} {Ln((Zr − d)/Zv) − Ψv} / (k2 u*) (equation 3), where Zr = reference height at which the measurements are made (m), d = zero-plane displacement height (m), Zo = roughness length of the vegetation for momentum (m), Zv = roughness length of the vegetation for the flow of vapour (m), k = von Karman constant = 0.41 (dimensionless), and Ψm, Ψv = corrections that the flows must undergo under the atmospheric stability conditions in which they occur. One of the criteria for establishing measures of atmospheric stability is based on the definition of the Monin-Obukhov stability length (L), whose expression is L = u*3 p Cp T / [k g (H + 0.07 LeE)], where u* is the friction velocity in m/s, T the air temperature in K and g the acceleration of gravity in m/s2. The differential equation that expresses the variation of wind speed with height above a plane, covered by vegetation or not, is ∂u/∂z = [u* / (k (z − d))] ФM, where ФM is a dimensionless stability function for momentum, which takes values > 1 when the atmosphere is stable, < 1 when it is unstable and = 1 when it is
neutral. The integration of the equation for ∂u/∂z depends on knowing the link between the stability function and the height z. Expressions for ФM relating the height z to the Monin-Obukhov stability length have been established experimentally for different stability conditions. The expression for an unstable atmosphere is ФM = {1 − 16 (z − d)/L}^(−1/4). Assuming this relationship, the equation for ∂u/∂z can be integrated between the levels (d + Zo) and Zr and, setting xo = {1 − 16 Zo/L}^(1/4) and x = {1 − 16 (Zr − d)/L}^(1/4), its solution results in the equation u = (u*/k) {Ln((Zr − d)/Zo) − ΨM}, in which ΨM1 = Ln{(1 + x2)(1 + x)2 / [(1 + xo2)(1 + xo)2]} − 2 arctan x + 2 arctan xo. An analogous procedure applied to the vapour pressure and temperature profiles in their differential forms, assuming ФH = Фv = {1 − 16 (z − d)/L}^(−1/2) for an unstable atmosphere, and integrating between the levels (d + Zv) and Zr, produces the integrated stability function Ψv1 = 2 Ln{(1 + x2)/(1 + xo2)}. For a stable atmosphere these functions take the forms ΨM2 = 5 (z − d − Zo)/L and Ψv2 = 5 (z − d − Zv)/L, in which it is assumed, following Webb (1979), that ФH = Фv = ФM = 1 + 5 (z − d)/L. Naturally, the conditions of neutrality in the atmosphere correspond to ΨM = Ψv = 0. The procedure for determining the potential evaporation, applied to each set of meteorological observations, must carry out the following calculations in each iteration: 1) a value for the Monin-Obukhov stability length L (in the first iteration L = ∞); 2) the stability functions ΨM and Ψv, given by the equations for ΨM1 and Ψv1 or ΨM2 and Ψv2 according to the stability conditions indicated by the last value of L in use; 3) the friction velocity u*, from the wind profile equation; 4) the aerodynamic resistance ra, from equation (3); 5) the potential evaporation, from the equation for LeE with rc = 0; 6) the sensible heat flow H, as the unknown term in the equation Rn − LeE − H − G = ∂A/∂t; 7) a new stability length L, using the corresponding equation. Finally, the changes in the variables E, u*, H and L are checked: if they are small the process is terminated, otherwise a new iteration is performed. Upon completion of two iterations, lower (Li) and upper (Ls) limits are defined, between which the sought solution lies. In subsequent iterations, in order to accelerate the process, the value assumed for L is no longer the last value found in step 7, but the midpoint of the interval Li-Ls, designated Lo. After determining a new value of L from Lo, the interval Li-Ls can be reduced by making one of the limits assume the value of Lo, according to the following rule: if the new value of L > Lo then Li = Lo; if L < Lo then Ls = Lo. After this reduction the new midpoint is calculated and the other operations of the procedure are repeated.
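A compact sketch of the iterative procedure (steps 1-7) is given below, using the stability functions quoted above. The constants, the roughness parameters, the input values and the use of the conventional negative sign in the Monin-Obukhov length (so that unstable daytime conditions give L < 0) are assumptions made for illustration; the Li-Ls bisection refinement described in the text is simplified here to a fixed-point loop with a convergence test.

```python
# Minimal sketch of the iterative potential-evaporation procedure (rc = 0).
# Stability functions, constants and inputs are illustrative; signs follow the
# standard convention rather than reproducing the paper's exact expressions.
import math

K, G_ACC, CP, GAMMA = 0.41, 9.81, 1004.0, 66.0   # von Karman, gravity, Cp, psychrometric (Pa/K)

def psi_m(zr, d, zo, L):
    if math.isinf(L) or L == 0:
        return 0.0
    if L > 0:                                    # stable: sign chosen so the log-law
        return -5.0 * (zr - d - zo) / L          # denominator increases (standard convention)
    x  = (1 - 16 * (zr - d) / L) ** 0.25         # unstable (Businger-Dyer form)
    xo = (1 - 16 * zo / L) ** 0.25
    return (math.log(((1 + x**2) * (1 + x)**2) / ((1 + xo**2) * (1 + xo)**2))
            - 2 * math.atan(x) + 2 * math.atan(xo))

def psi_v(zr, d, zv, L):
    if math.isinf(L) or L == 0:
        return 0.0
    if L > 0:
        return -5.0 * (zr - d - zv) / L
    x  = (1 - 16 * (zr - d) / L) ** 0.25
    xo = (1 - 16 * zv / L) ** 0.25
    return 2 * math.log((1 + x**2) / (1 + xo**2))

def potential_evaporation(u, T, es_minus_e, rn, g_soil, dA_dt, rho, delta,
                          zr=10.0, d=0.6, zo=0.1, zv=0.01, n_iter=20):
    avail = rn - g_soil - dA_dt                  # available energy R (W/m2)
    L = math.inf                                 # step 1: start from neutral conditions
    le_e = 0.0
    for _ in range(n_iter):
        pm, pv = psi_m(zr, d, zo, L), psi_v(zr, d, zv, L)          # step 2
        ustar = K * u / (math.log((zr - d) / zo) - pm)             # step 3
        ra = ((math.log((zr - d) / zo) - pm) *
              (math.log((zr - d) / zv) - pv)) / (K**2 * u)         # step 4 (eq. 3)
        le_e = (delta * avail + rho * CP * es_minus_e / ra) / (delta + GAMMA)  # step 5, rc = 0
        h = avail - le_e                                           # step 6 (residual)
        L_new = -ustar**3 * rho * CP * T / (K * G_ACC * (h + 0.07 * le_e))     # step 7
        if abs(L_new - L) < 1.0:
            break
        L = L_new
    return le_e

if __name__ == "__main__":
    lee = potential_evaporation(u=2.5, T=298.0, es_minus_e=800.0, rn=450.0,
                                g_soil=15.0, dA_dt=10.0, rho=1.2, delta=145.0)
    print(f"potential evaporation LeE ~ {lee:.0f} W/m2")
```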
absolute frequency (FA), relative frequency (FR), relative dominance (DoR), importance value index (IVI) and relative importance (IR). The equations adopted were the following. Leaf area index (IAF), which expresses the leaf area (AF) per unit of ground area constant (S), established on the basis of the spacing and number of plants: IAF = AF/S · 0.5. The leaf area ratio (RAF) relates the leaf area (AF) to the total weight of the plant (Pt): RAF = AF/Pt. The absolute growth rate (TCA) expresses the growth of the plant in weight (Ptn − Ptn−1) between two consecutive samplings (Tn − Tn−1): TCA = (Ptn − Ptn−1)/(Tn − Tn−1), in g day−1. The dry matter production rate (TPMS) is calculated as TPMS = (Ptn − Ptn−1)/[S (Tn − Tn−1)], in g dm−2 day−1. The relative growth rate (TCR) represents the increase in weight per unit of weight present at the beginning of the period, calculated per day: TCR = (ln Ptn − ln Ptn−1)/(Tn − Tn−1), in g g−1 day−1, where ln is the natural logarithm and (Tn − Tn−1) the growth period. The net assimilation rate (TAL) represents the plant's capacity to increase its weight per unit of leaf surface per day: TAL = [(Ptn − Ptn−1)/(AFn − AFn−1)] · [(ln AFn − ln AFn−1)/(Tn − Tn−1)], in g dm−2 day−1. Relative density (DR): DR = Ne/Nt · 100 (%), where Ne is the number of individuals of a species found in the samples and Nt is the total number of individuals sampled. Absolute frequency (FA): FA = NAe/NAt · 100 (%), where NAe is the number of samples in which a given species occurred and NAt is the total number of samples taken. Relative frequency (FR): FR = FAe/FAt · 100 (%), where FAe is the absolute frequency of a given species and FAt is the sum of the absolute frequencies of all species of the community. Relative dominance: DoR = MSe/MSt · 100 (%), where MSe is the dry matter accumulated by a given species and MSt is the dry matter accumulated by the whole community. The importance value index (IVI): IVI = DR + FR + DoR, and the relative importance (IR): IR = IVIe/IVIt · 100 (%), where IVIe is the importance value index of a particular population and IVIt is the sum of the importance value indices of all the populations of the community. Samples were collected to determine the leaflet-petiole angle class (LBC), the thickness of the leaf tissue, the plastochron index (IP), the dry matter content, the length of the shoot and of the central leaflet, the leaf weight and area, and the number of leaves. The plastochron index (IP), the time interval between the initiation of two successive leaves, and the leaf plastochron index (IPF) were defined by Erickson & Michelini (1957). The length of the leaf emerging at the apex is used as a morphological scale, and a plant has a plastochron age of n when the length of the leaf of serial number n equals the reference value (R). The IP and IPF were determined by the formulas IP = n + (log Ln − log R)/(log Ln − log Ln+1) and IPF = IP − a, where n is the serial number of the youngest leaf whose length is greater than or equal to the reference length (R), Ln is the length of that leaf (which by definition is greater than or equal to R), Ln+1 is the length of leaf n + 1 (which by definition is less than R), and a is the serial number of any leaf along the stem. Since IP is a linear function of time, the following equation can be written: IP = (log Lo − log R)/P + (r/P) · T, where P is the plastochron rate of elongation (log Ln − log Ln+1), P/r is the duration of the plastochron and r/P is the rate of leaf appearance.
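For concreteness, the sketch below computes the phytosociological indices (DR, FR, DoR, IVI, IR) and the plastochron index IP from the formulas above. The species table, plot counts and leaf lengths are invented sample values, not the study's measurements.

```python
# Minimal sketch of the phytosociological indices (DR, FR, DoR, IVI, IR) and of
# the plastochron index IP defined above. Sample data are invented.
import math

# per-species records: number of individuals, number of plots of occurrence, dry matter (g)
species = {
    "Sida rhombifolia":     {"n": 40, "plots": 8, "dry_g": 120.0},
    "Sida cordifolia":      {"n": 25, "plots": 6, "dry_g":  80.0},
    "Sidastrum micranthum": {"n": 15, "plots": 4, "dry_g":  55.0},
}
total_plots = 10

n_total  = sum(s["n"] for s in species.values())
ms_total = sum(s["dry_g"] for s in species.values())
fa = {name: 100.0 * s["plots"] / total_plots for name, s in species.items()}   # absolute frequency
fa_total = sum(fa.values())

ivi = {}
for name, s in species.items():
    dr  = 100.0 * s["n"] / n_total          # relative density
    fr  = 100.0 * fa[name] / fa_total       # relative frequency
    dor = 100.0 * s["dry_g"] / ms_total     # relative dominance
    ivi[name] = dr + fr + dor               # importance value index

ivi_total = sum(ivi.values())
for name, v in ivi.items():
    print(f"{name}: IVI = {v:.1f}, IR = {100.0 * v / ivi_total:.1f}%")

def plastochron_index(n, ln_mm, ln1_mm, r_mm=30.0):
    """IP = n + (log Ln - log R) / (log Ln - log Ln+1), with R the reference length."""
    return n + (math.log10(ln_mm) - math.log10(r_mm)) / (math.log10(ln_mm) - math.log10(ln1_mm))

print(f"IP = {plastochron_index(n=7, ln_mm=42.0, ln1_mm=18.0):.2f}")
```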
Table 4: Phytoecological factors during the water surplus period. Environmental factors such as temperature, drought and the nutritional status of the plant are causes that change the leaflet-petiole angle class (LBC) under natural conditions.
2018-12-21T19:15:14.724Z
2012-10-20T00:00:00.000
{ "year": 2012, "sha1": "e53affa35c0cbc7438bbee6acca2a664488dc331", "oa_license": "CCBY", "oa_url": "https://www.e-publicacoes.uerj.br/index.php/ric/article/download/4135/2970", "oa_status": "GOLD", "pdf_src": "Anansi", "pdf_hash": "e53affa35c0cbc7438bbee6acca2a664488dc331", "s2fieldsofstudy": [ "Agricultural And Food Sciences" ], "extfieldsofstudy": [ "Geography" ] }
256179548
pes2o/s2orc
v3-fos-license
Evaluating clinical effectiveness and impact of anti-pneumococcal vaccination in adults after universal childhood PCV13 implementation in Catalonia, 2017–2018 Highlights • Benefits from pneumococcal vaccination in adults are uncertain at present.• In Catalonia, in 2017–18, pneumococcal disease burden in adults remains considerable.• Clinical effectiveness of PCV13/PPsV23 has not emerged after free PCV13 for children. Introduction To date, despite successive formulations of anti-pneumococcal vaccines for adults and children during the past decades, infections caused by Streptococcus pneumoniae, mainly invasive pneumococcal disease (IPD) and pneumococcal pneumonia (PP) remain a major public health problem worldwide. Pneumococcal disease has a bimodal distribution, with a high burden among children under 5 years and adults over 50 years. In 2017, as well as 9,600 deaths by pneumococcal meningitis, pneumococcus caused approximately 659,000 low-respiratory infection deaths among people over 50 around the world. [1]. The existence of more than 90 pneumococcal serotypes (differing in their immunogenicity, virulence and epidemiological distribution in distinct geographical areas), [2] together with the observed phenomenon of serotype replacement after the introduc-tion of conjugate vaccines, has largely complicated the development of a fully effective anti-pneumococcal vaccination strategy. [1,3]. At present, apart from childhood immunisation, two antipneumococcal vaccines have been available for use in adults: the 23-valent pneumococcal polysaccharide vaccine (PPSV23, classically recommended for high-risk individuals and elderly people since the 1990s) and the 13-valent protein-polysaccharide conjugate vaccine (PCV13, initially marketed to replace PCV7 for childhood immunisation in 2010 and licensed for use in adults since 2012). [3] A major advantage for the PCV13 would be its theoretical higher immunogenicity, but, it has a lower serotype coverage than the PPSV23. [4]. In Catalonia, a region in Northeastern Spain with 7.6 million people, the PPSV23 has been recommended (and publicly funded) for all persons over 60 years (with or without risk conditions) and individuals under 60 years with at-risk conditions since 1999. The PCV13 (available for use in adults since 2012) is recommended for the same adult target population, although it is publicly funded only for high-risk individuals (basically immunocompromised patients). In children, a publicly funded vaccination programme offering a free PCV13 for all infants 2 years began in late 2016; before this date, the PCV7/PCV13 had been also used in children (without public funding, except for immunocompromised children). [16] PPSV23 and PCV13 effectiveness among adults was evaluated by our research team in a prior study performed during 2015-2016. [10] The present study is aimed to update information about the clinical effectiveness of adult vaccination after free PCV13 pediatric approval. Concretely, we assessed PPSV23 and PCV13 effectiveness in preventing hospitalised pneumonia (pneumococcal and all-cause), death from pneumonia and death from any cause among Catalonian people over 50 years-old throughout the 2017-2018 period (early 2-year period post-PCV13 free implementation in infants). Design, setting and study population This is a closed population-based retrospective cohort study involving 2,059,645 Catalonian middle-aged and older adults followed between 01/01/2017-31/12/2018. 
Cohort members were all persons 50 years or older (birth date before 01/01/1967) who were assigned to the 274 Primary Health Care Centers (PHCCs) managed by the Catalonian Health Institute (ICS, Institut Catala de la Salut) on January 1, 2017 (date of study start). In the study region (Catalonia, Spain) there are 358 PHCCs, of which 274 (76.5%) are managed by the ICS and 84 are managed by other providers. The analysed cohort (n = 2,059,645 persons ≥ 50 years) represented 72.6% of the total 2,838,002 Catalonian inhabitants in this age stratum according to census data in January 2017. [17] At the start of the study, anti-pneumococcal vaccination uptakes among at-risk/older adults were approximately 60% for the PPSV23 and 1% for the PCV13 (48% in children). [18,19]. For this report, cohort members were followed from the beginning of the study until the occurrence of any event, disenrollment from the PHCC, death, or the end of the two-year follow-up (31/12/2018). The study was approved by the ethics committee of the institution (ethics committee IDIAP Jordi Gol, file 20/065-PCV) and was conducted in accordance with the general principles for observational studies. [20] Considering the population-based and non-interventional design, individual consent for the study participants (n = 2,059,645) was exempted. Data analyses were anonymised and the risk of identification was null. Data sources To establish the baseline characteristics (vaccinations, comorbidities and underlying risk conditions) of the cohort at study start, as well as to identify vaccinations after the study started, we used the information system for the development of research in primary care of Catalonia (SIDIAP), [21] which compiles administrative data and clinical information contained in the ICS's electronic PHCC medical records system and is commonly used for epidemiological studies in the region. Quality criteria for the clinical data of the SIDIAP research database have been reported elsewhere. [22]. To identify study events (hospitalisations from pneumococcal and all-cause pneumonia) that occurred in the study population throughout the study period, we used the national surveillance system for hospital discharge data (CMBD, Conjunto Mínimo Básico de Datos), which is maintained by the Spanish Ministry of Health and covers an estimated 99.5% of the total Spanish population (covered in the National Health Care System by compulsory health insurance). [23] In the present study we used CMBD hospital discharge codes, coded according to the International Classification of Diseases, 10th Revision (ICD-10), reported during 2017-2018 from 68 Catalonian hospitals. Outcomes Pneumococcal pneumonia (ICD-10 code J13), pneumonia by other microorganisms (codes J11.0, J12, J14-J17) and pneumonia by unidentified/unspecified microorganism (code J18) were defined on the basis of hospital discharge ICD-10 codes (any listed position) reported by the CMBD in hospitalisations occurring among cohort members from 01/01/2017 to 31/12/2018. Bacteremic PP was considered in those cases with ICD-10 code J13 plus A40.3 or B95.3. Death from any cause was considered according to administrative data (vital status), which is periodically updated in the SIDIAP database. Death from pneumonia (case-fatality) was considered when the patient died within 30 days after the pneumonia diagnosis.
Vaccination status PPSV23 and PCV13 vaccination status were determined according to data registered in the PHCCs' electronic clinical records which contain specially designated fields for pneumococcal and influenza vaccinations (virtually all of them are administered at the PHCCs in the Spanish Health System). At baseline, cohort members were classified as PCV13 and/or PPSV23 vaccinated if they had received at least one dose of the vaccine before the study started. Throughout the study period, PCV13/PPSV23 vaccination status was a time-varying condition since some individuals received the vaccine after the study started. Subjects were considered to be vaccinated 14 days after vaccine administration. A subject was considered as unvaccinated when a vaccination was not recorded. Covariates Baseline covariates were age, sex, influenza vaccination in prior autumn, history of hospitalisation for pneumococcal disease or pneumonia in the previous 2 years, history of chronic respiratory disease, chronic heart disease, chronic liver disease, diabetes mellitus, current smoking, alcoholism, and immunological situation. Immunocompromise was defined by the presence of any one of the following: asplenia, immunodeficiency/HIV-infection, severe chronic renal disease, bone marrow transplantation, cancer (solid organ or haematological neoplasia) and/or immunosuppressive medication. Definitions used to identify comorbidities/underlying conditions were listed in an appendix elsewhere. [10]. Statistical analyses Incidence rates (IRs) were calculated as person-years, considering that the numerator was the number of events and the denominator was the sum of the persons-time contributed to each cohort member during the study period. Only a first episode of hospitalisation from pneumonia during the study period was considered and, therefore, the analyses do not include multiple events per person. To compare baseline characteristics of study subjects according to their PPSV23 and PCV13 vaccination status we used Chisquared or Fisher's test as appropriate. Cox regression models for time-varying covariables were used to estimate hazard ratios (HRs) and evaluate the association between having received PPSV23/PCV13 and the time of the first outcome during the study period. The PPSV23 and the PCV13 vaccination status were considered time-varying conditions (e.g., subjects vaccinated after study started), whereas the other covariables were defined at study entry. All above mentioned covariables (i.e., age, sex, vaccinations' history, and presence of comorbidities/underlying risk conditions) were initially considered potential candidates for the calculation of multivariable Cox models. The method to select a subset of covariates to include in the final models was the purposeful selection. [24] The proportional hazard assumptions were assessed by adding the covariate by log-time interactions to the model. Both PCV13/ PPSV23, influenza vaccination status, history of prior pneumococcal disease and immunocompromising conditions were considered as epidemiologically relevant covariates, being included in all the final models. All models were compared by the partial likelihood ratio test and Akaike information criterion. Besides the main analysis including the total study cohort, we performed supplementary analyses by age subgroups, immunological status and four specific at-risk conditions (patients with chronic respiratory disease, chronic heart disease, diabetes mellitus and smokers). 
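A minimal sketch of this analysis style, assuming an invented long-format data layout rather than the actual SIDIAP/CMBD variables, is shown below: crude incidence rates are computed from person-time, and hazard ratios are estimated with a Cox model for a time-varying vaccination status using the lifelines package. A real model would additionally include the covariates listed above.

```python
# Minimal sketch: person-time incidence rates and a Cox model with a
# time-varying vaccination covariate (lifelines). Column names and rows are
# invented toy data, not the study's variables.
import pandas as pd
from lifelines import CoxTimeVaryingFitter

# Long format: one row per interval in which covariate values are constant;
# a subject vaccinated mid-follow-up contributes an unvaccinated and a vaccinated row.
rows = [
    # id, start, stop (years), ppsv23, event (hospitalised pneumonia)
    (1, 0.0, 0.8, 0, 0), (1, 0.8, 2.0, 1, 0),
    (2, 0.0, 2.0, 0, 1),
    (3, 0.0, 2.0, 0, 0),
    (4, 0.0, 1.5, 1, 1),
    (5, 0.0, 2.0, 1, 0),
    (6, 0.0, 2.0, 1, 0),
    (7, 0.0, 1.0, 0, 1),
    (8, 0.0, 2.0, 1, 1),
]
df = pd.DataFrame(rows, columns=["id", "start", "stop", "ppsv23", "event"])

# Crude incidence rate per 100,000 person-years by vaccination status.
person_years = df.assign(py=df["stop"] - df["start"]).groupby("ppsv23")["py"].sum()
events = df.groupby("ppsv23")["event"].sum()
print((events / person_years * 1e5).rename("IR per 100,000 person-years"))

# Hazard ratio from a Cox model for time-varying covariates.
ctv = CoxTimeVaryingFitter()
ctv.fit(df, id_col="id", event_col="event", start_col="start", stop_col="stop")
ctv.print_summary()
```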
All results were expressed with 95% confidence intervals (CIs). Statistical significance was set at p < 0.05 (two-tailed). The analyses were performed using IBM SPSS Statistics for Windows, version 24 (IBM Corp., Armonk, N.Y., USA). Results The 2,059,645 cohort members were observed for a total of 3,958,528 person-years, of which 1,532,186 were PPSV23 vaccinated and 33,228 were PCV13 vaccinated. Considering the PPSV23 status, 798,548 individuals had received PPSV23 before study start (contributing to the analyses with 1,500,833 person-years as PPSV23 vaccinated) and 41,745 individuals received PPSV23 later (contributing to the analyses with 51,392 person-years as PPSV23 unvaccinated and 31,353 person-years as PPSV23 vaccinated). As for the PCV13 status, 13,917 individuals had received PCV13 before study start (contributing to the analyses with 25,565 person-years as PCV13 vaccinated) and 7,769 individuals received PCV13 later (contributing to the analyses with 7,419 person-years as PCV13 unvaccinated and 7,664 person-years as PCV13 vaccinated). The vast majority (81.1%) of PCV13 vaccinated subjects had dual vaccination (PCV13 + PPSV23). At baseline, the mean age of cohort members was 66 years (standard deviation: 11.4); 951,011 (46.2%) were men and 1,108,634 (53.8%) were women. Considering comorbidities/underlying risk conditions, 351,287 (17.1%) cohort members had diabetes mellitus, 344,540 (16.7%) were smokers and 219,682 (10.7%) had chronic heart disease.

Table 2: Incidence and risk of hospitalisation from pneumococcal and all-cause pneumonia in relation to PCV13 and PPSV23 vaccination status in the total study population (N = 2,059,645).

Table 2 shows the number of events, IRs, and unadjusted, age- and sex-adjusted and multivariable-adjusted risks of pneumococcal, other-microorganism, unknown-aetiology and all-cause pneumonia in relation to PCV13 and PPSV23 vaccination status in the total study cohort. In the unadjusted analyses, as well as in the age- and sex-adjusted analyses, both PCV13 and PPSV23 were associated with an increased risk for all analysed outcomes. In the multivariable analyses, having received the PPSV23 did not significantly alter the risk of overall pneumococcal pneumonia (HR: 1.07; 95% CI: 0.98-1.18; p = 0.153) and slightly increased the risk of all-cause pneumonia (HR: 1.14; 95% CI: 1.10-1.18; p < 0.001). Considering the PCV13, it appeared significantly associated with an increased risk (p = 0.004).

Table 3: Incidence and risk of death from pneumonia and death from any cause in relation to PCV13 and PPSV23 vaccination status in the total study population (N = 2,059,645).

Considering all-cause death, the mortality rate was 2042 deaths per 100,000 person-years in the total study cohort, 4795 per 100,000 in PCV13 vaccinated and 3725 per 100,000 in PPSV23 vaccinated subjects. After multivariable adjustments, pneumococcal vaccination (neither PCV13 nor PPSV23) did not significantly alter the risk of death from pneumonia (pneumococcal and/or all-cause), while PPSV23 was associated with a slight reduction in the risk of all-cause death (HR: 0.97; 95% CI: 0.95-0.99; p = 0.002) (Table 3). Supplementary analyses focused on age subgroups (under/over 60 years), immunocompetent/immunocompromised subjects and specific at-risk comorbidity subgroups are shown in Table 4 (for the PPSV23) and Table 5 (for the PCV13).
The PPSV23 was associated with a marginally significant reduction in the risk of bacteremic pneumococcal pneumonia in the age subgroup ≥ 60 years (HR: 0.74; 95% CI: 0.56-0.97; p = 0.031) and in immunocompetent persons (HR: 0.72; 95% CI: 0.53-0.98; p = 0.035), but no benefits emerged against all pneumococcal pneumonia and/or all-cause pneumonia. Considering the PCV13, clinical benefits did not emerge either. Both PCV13 and PPSV23 were associated with an increased multivariable-adjusted risk of all-cause pneumonia in all analysed population subgroups (with HRs ranging between 1.07 and 1.49 for the PPSV23 and 1.11-1.41 for the PCV13). Discussion At present, after PCV13 introduction for infants, benefits from using PCV13 (and also PPSV23) in adults are uncertain. [13][14][15] The present study assessed the clinical effectiveness and public health impact of PCV13/PPSV23 vaccinations in the general adult population over 50 years in Catalonia, Spain, throughout 2017-2018 (the early two-year period after free PCV13 approval for infants). The data provided here update data reported during 2015-2016 (before free PCV13 approval). [10]. As its main findings, pneumococcal vaccination did not emerge as effective (neither PCV13 nor PPSV23) in preventing hospitalised pneumonia (pneumococcal or all-cause) or death from pneumonia in the total study cohort. In stratified analyses, the PPSV23 was associated with a marginally significant reduction in the risk of IPD (bacteremic PP) in the age subgroup over 60 years (where the vaccine is recommended) and in immunocompetent persons. However, pneumococcal vaccination did not emerge as effective (neither PPSV23 nor PCV13) against all pneumococcal pneumonia and/or all-cause pneumonia in specific at-risk population subgroups where vaccination is also recommended (i.e., immunocompromised subjects, chronic respiratory disease, heart disease or diabetes). Considering pneumococcal pneumonia, our results are essentially similar to the data observed during the 2015-2016 period and support the view that vaccinating adults has no public health impact in reducing hospitalised pneumococcal pneumonia (all serotypes) in the current era of universal PCV13 infant vaccination in our setting.

Table 4: Stratified analyses of PPSV23 vaccination effectiveness according to age subgroups, immunological situation and distinct at-risk conditions.

In addition, the data raise concern about a possible increasing risk of all-cause pneumonia (non-vaccine pneumococcal serotypes, other microorganisms and/or unknown aetiology) among vaccinated subjects. In this cohort study, crude IRs of hospitalisation from all-type pneumonia were considerably higher in vaccinated than in non-vaccinated persons, reflecting the baseline excess risk of vaccinated subjects, who were older and had more comorbidities/risk conditions than the unvaccinated (especially for the PCV13). We tried to resolve these baseline differences between vaccinated and unvaccinated subjects by multivariable analysis but, even after adjustment for age/sex and underlying risk conditions, both PCV13 and PPSV23 remained significantly associated with an increased risk of all-cause pneumonia (both in the total study cohort and in the stratified subgroups). In contrast with other recent studies in elderly people in the USA, [25,26] population benefits of PCV13 vaccination against all-type pneumonia have not emerged in the present study. We note that no adjustment method fully resolves confounding by indication in observational studies.
[27] Regarding PCV13 (with very small vaccine coverage in our cohort, since it was only funded for immunocompromised patients), caution is needed in interpreting the vaccine effectiveness estimates (multivariable HRs), because they could simply reflect that the immunocompromised PCV13 vaccinated population had an enormously increased a priori risk of pneumococcal disease outcomes, and that PCV13 could not overcome this excess risk. A lower risk of death from any cause was observed in certain analyses, such as the fully adjusted overall cohort and the immunocompetent adults (also observed for death from all-cause pneumonia). These data could simply reflect baseline risk differences between vaccinated and unvaccinated subjects, but they could also be related to the fact that pneumococcal pneumonia increases the risk of subsequent cardiac events and all-cause mortality. [28]. In this regard, there is insufficient evidence to draw conclusions on the impact of PCV13 or PPSV23 on mortality among older adults. [1]. During the past decades, likely related to differences in vaccination practices and smoking, the role of pneumococcus as a causative microorganism in pneumonia cases in developed countries has clearly declined (currently representing less than 10-15% of overall pneumonia cases in North America and Europe). [29,30] Therefore, the potential value of pneumococcal vaccination in reducing all-cause pneumonia has also decreased, and it would be relatively low at present. In this regard, pneumococcal pneumonia represented 14.9% (3592/24,136) of overall pneumonia cases in the present study, supporting that the role of pneumococcus as a causative pathogen of pneumonia has also declined in recent years in our setting (where pneumococcus represented nearly 40% of community-acquired pneumonia cases at the beginning of the present century). [31]. In the present study, data about pneumococcal serotypes were not available (since they are not reflected in the Spanish CMBD system) and, as a consequent limitation, this study is not able to assess vaccination effectiveness against vaccine-type pneumococcal pneumonia (which would be the most specific outcome for evaluating vaccine efficacy). It must be noted that the value of PCV13/PPSV23 vaccination of adults against all pneumococcal pneumonia (any serotype) has also decreased in recent years because of indirect effects from routine PCV13 pediatric use. [32][33][34][35][36][37] In Spain, data from the National Surveillance Microbiology Laboratory of Pneumococcus revealed that only 29.7% of overall IPD cases (mostly bacteremic pneumococcal pneumonia) were caused by PCV13 serotypes during the 2017-2018 period. [33] In Catalonia, during the same period, PCV13 and PPSV23 serotypes represented 30.4% and 70.5%, respectively, of overall IPD cases (they were 40.4% and 73.6% during 2014-2015). [34]. In European PCV13 sites, several years after vaccine introduction, the incidence of IPD caused by PCV13 serotypes declined substantially in older adults (as an indirect effect of childhood PCV13 vaccination) while the incidence of non-PCV13 serotypes increased by 63%, resulting in a non-significant 9% reduction (95% CI: −4% to 19%) of IPD caused by all serotypes.
[35] In the United States, where PCV13/PPSV23 uptakes among children and older adults are relatively high, only 4.6% of overall pneumonia cases hospitalised between October 2013 and September 2016 in people aged 18 years or more were caused by PCV13 serotypes (3.8-5.3% in patients aged 18-64 years depending on their risk status and 4.2% in people aged 65 years or older). [36]. The new generation of higher-valent PCVs (PCV15, PCV20 and the future PCV24) has the potential to reduce a substantial proportion of pneumococcal disease cases among all age groups. However, considering the repeated serotype-replacement phenomenon observed after each PCV introduction, a new-technology pneumococcal vaccine (with complete protection regardless of serotype) remains desirable. Major strengths of this study are the large size of the study cohort (which included more than 2 million adults over 50 years and represented almost 73% of the total Catalonian inhabitants in this age stratum), as well as the use of multivariable survival analysis methods to estimate PCV13/PPSV23 effectiveness accurately, adjusted for the major known underlying risk conditions. To our knowledge, this is the largest cohort study evaluating pneumococcal vaccination effectiveness in adults performed to date. Importantly, the study provides population-based incidence data that are otherwise scarce, and it has been able to assess vaccination effectiveness against public-health-relevant outcomes (such as hospitalised all-type pneumococcal pneumonia, all-cause pneumonia and death from pneumonia) in distinct at-risk population subgroups of interest (e.g., chronic respiratory or cardiac disease, diabetics and smokers) where PPSV23/PCV13 effectiveness data are relatively rare in the literature. [38]. Major limitations of this study are related to its observational nature; mainly, the non-randomised vaccination, the small PCV13 coverage and the absence of serotype data. We assumed that hospital discharge coding was correct, but validation of diagnoses was not feasible considering the study design and the enormous size of the cohort. We did not consider time since vaccination in the analyses because the vast majority of PPSV23 vaccinated subjects had received the vaccine more than 5 years earlier (since revaccination is not routinely recommended in those aged > 65 years) and all PCV13 vaccinated subjects had received the vaccine in the previous 5 years (within 2012-2016). We note that the definition criteria of pneumococcal pneumonia vary between studies, but we also note that the use of ICD diagnosis codes to define pneumococcal pneumonia, despite recognised limitations, [39] has been commonly used in many studies evaluating this issue. In this regard, we underline that all participating Catalonian hospitals apply basically similar diagnostic checklists and treatment for patients with a clinical suspicion of pneumonia (which is established on the basis of an acute respiratory illness, with evidence of a new infiltrate on a chest radiograph), with blood/sputum cultures and urinary antigen testing used as the conventional diagnostic workup performed according to the attending physician. [40] As a real-world data study, vaccinations in this study were more frequently prescribed for persons with comorbidities/underlying risk conditions and, as noted above, residual confounding may persist in the vaccine effectiveness estimations despite multivariable adjustments.
We recognise these limitations but, unlike vaccine efficacy, which must be assessed by trials under controlled conditions, vaccination effectiveness must be assessed by observational studies conducted under real-world practice conditions, as in the present study, where vaccines are not homogeneously used. Caution is needed in extrapolating the data to other geographical settings with distinct epidemiological conditions (i.e., different vaccine use, vaccination uptakes among children and adults, prevalence of circulating serotypes, etc). In conclusion, in this real-world data study involving 2,059,645 Catalonian adults ≥ 50 years followed throughout 2017-2018, apart from a protective effect against IPD, clinical benefits of adult vaccination in preventing hospitalised all-type pneumococcal pneumonia or all-cause pneumonia have not emerged, neither for the PCV13 nor for the PPSV23. Our data support the view that the current anti-pneumococcal vaccination strategy is insufficient to reduce the pneumonia burden in at-risk and older adults in our setting. At present, when PCV13 childhood immunisation is working and indirect effects have occurred (by reducing circulating PCV13 strains in the population), the potential value of vaccinating adults with the PCV13 (but also with the PPSV23, which shares twelve common serotypes) has clearly decreased. Vaccinating adults with PCV13/PPSV23 could still provide clinical benefits in those settings where the vaccine-type disease burden remains relatively high, but changes in adult vaccination strategies are necessary in settings where universal infant vaccination is working and, therefore, circulating vaccine-type serotypes are low. [1] In this way, the CDC has recently approved new vaccination recommendation criteria for at-risk and older adults, supporting the use of the new generation of PCVs (PCV15 or, better, PCV20) in all age groups. [37]. Author contributions AVC and OOG conceptualized and designed the study; AVC, CDC and ESG wrote and edited the manuscript; CDC, VTV, MFP and DRS assessed outcomes; CRC obtained data; ESG and AVR did the statistical analyses; AVC coordinated the study. All authors have read and agreed to the final version of the manuscript. Data availability Interested authors may obtain SIDIAP data (subject to prior ethics and scientific approval by the ethics and clinical research committee of the Primary Care Research Institute Jordi Gol (IDIAP Jordi Gol)). Declaration of Competing Interest The authors declare that they have no known competing financial interests or personal relationships that could have appeared to influence the work reported in this paper.
2023-01-24T16:06:08.728Z
2023-01-01T00:00:00.000
{ "year": 2023, "sha1": "86eb5caddcfbb022a53aab615c4c400c2ba07c3c", "oa_license": "CCBYNCND", "oa_url": null, "oa_status": "CLOSED", "pdf_src": "PubMedCentral", "pdf_hash": "7c0394e4c5bb944e3b2b2c09523d21768f9ef60b", "s2fieldsofstudy": [ "Medicine" ], "extfieldsofstudy": [] }
37827521
pes2o/s2orc
v3-fos-license
Minimizing Time in Scheduling of Independent Tasks Using Distance-Based Pareto Genetic Algorithm Based on MapReduce Model Distributed Systems (DS) have a collection of heterogeneous computing resources to process user tasks. Task scheduling in DS has become a prime research topic, not only because of the difficulty of finding an optimal schedule, but also because of the time taken to find it. The users of DS services are increasingly attentive to the time needed to complete their tasks. Several algorithms have been implemented to find the optimal schedule. Evolutionary algorithms are among the best, but the time they take to find the optimal schedule is high. This paper presents a distance-based Pareto genetic algorithm (DPGA) with the MapReduce model for scheduling independent tasks in a DS environment. In DS, the task scheduling problem is most often formulated as a multi-objective optimization problem. This paper aims to develop optimal schedules by minimizing makespan and flowtime simultaneously. The algorithm is tested on a set of benchmark instances. The MapReduce model is used to parallelize the execution of DPGA automatically. Experimental results show that DPGA with the MapReduce model achieves a reduction in makespan, mean flowtime and execution time of 12%, 14% and 13%, respectively, compared with the non-dominated sorting genetic algorithm (NSGA-II) with the MapReduce model, which is also implemented in this paper. Introduction The computational power of an individual system is not sufficient for solving widely used complex computational tasks such as high-energy physics and earth science. In order to solve complex jobs, high-performance parallel and distributed systems are developed with a number of processors. As the computing nodes are heterogeneous in a multiprocessor environment, the execution time of tasks varies on each processor. Scheduling of tasks is a key issue in achieving high usability of the supercomputing capacity of a distributed computing environment [1]. To ensure efficient utilization of resources, suitable scheduling algorithms are used to assign the tasks to the available processors efficiently. For a distributed computing environment, static scheduling can be used because the computing resources are geographically distributed with various ownerships, access policies and different constraints. Schedulers built with complex arithmetic techniques rely on the available values of the application and the environment, so heuristic methods are the best approach to find the optimal schedule in the DS environment [2]. The most important criteria used to analyze the efficiency of scheduling techniques are makespan and flowtime. The time taken to complete the last task is the makespan [3], and the sum of the completion times of all tasks in a schedule is the flowtime. The schedule that optimizes both makespan and flowtime is called the optimum schedule [4]. To minimize makespan, the Longest Job is scheduled to the Fastest Resource (LJFR), and to minimize flowtime, the Shortest Job is scheduled to the Fastest Resource (SJFR) [2]. Flowtime minimization tends to increase the makespan, which makes the problem a multiple-objective one.
The Genetic Algorithm (GA) is a search- and population-based model [5] that has been used extensively in various problem domains. GA has the capability to search various regions of the solution space and to maintain a diverse set of solutions for the distributed computing problem. GA uses genetic operators to improve the structure of good solutions in the different objective spaces. These characteristics of GA are used to find the best optimal schedule for the multi-objective problem in distributed systems. The distance-based Pareto genetic algorithm (DPGA) of Osyczka [6] is used in this paper. DPGA uses a distance computation and dominance test procedure and an elitist method of combining the parent population with the offspring population for the next iteration. The set of most difficult static benchmark instances of Braun et al. [1] is used to analyze the performance of NSGA-II; the degree of task and resource heterogeneity can be captured by these instances. Hadoop MapReduce is a framework for distributed large-scale data processing on computer clusters. MapReduce is used to automatically parallelize the data processing on clusters in a reliable and fault-tolerant manner [7]. The MapReduce programming model is used to implement the multi-objective DPGA to find the optimal schedule in minimum time in the distributed computing environment. The DPGA with the MapReduce model suits distributed computing environments with varied computing resources, and scheduling is done by considering makespan and flowtime minimization. The DPGA with the MapReduce model generates better solutions in less time than NSGA-II with the MapReduce model. The remainder of the paper is structured as follows. Section 2 discusses the literature review. An introduction to multi-objective optimization is presented in Section 3. Section 4 describes how the variety of the distributed computing environment is captured in the simulation process. The scheduling method using NSGA-II with MapReduce and DPGA with MapReduce is presented in Section 5. Simulation results are shown in Section 6. Finally, Section 7 concludes and discusses future work. Related Work Optimal scheduling of independent tasks to available resources in DS is an NP-complete problem, and it relies on various heuristic and meta-heuristic algorithms. A few well-known heuristic methods are min-max [8], suffrage [9], min-min, max-min [10] and LJFR-SJFR [11]. These heuristic methods are time-consuming. Recently, several meta-heuristic methods have been developed to solve complex computational problems. The most popular methods are GA [6], particle swarm optimization (PSO) [12], ant colony optimization (ACO) [13] and simulated annealing (SA) [14]. A description of eleven heuristics and a comparison on various distributed environments was given by Braun et al. [1], illustrating the effectiveness of GA relative to the others. All the above meta-heuristic methods considered a single objective and aimed to minimize the makespan. Some methods have considered multiple objectives while scheduling tasks in distributed environments. Izakian et al. [15] compared five heuristics, depending on machine and task characteristics, for minimizing both makespan and flowtime, but these were calculated separately. Several nature-inspired meta-heuristic methods such as GA, ACO and SA for scheduling tasks in a grid computing environment using single- and multi-objective optimization were studied by Abraham et al. [16]. Xhafa et al.
[17] implemented a GA-based scheduler. All the above methods convert the multi-objective optimization problem into a scalar cost function, which turns it into a single-objective problem before optimization. To minimize the amount of time needed to find the best schedule, Lim et al. [18] implemented a PGA and Durillo et al. [19] implemented a parallel execution of NSGA-II, but these methods face difficulties with communication and synchronization between the resources in a distributed environment. This paper implements NSGA-II with the MapReduce programming model and DPGA with the MapReduce programming model to find the best schedule, treating task scheduling as an efficient real-time multi-objective optimization problem. Multi-Objective Optimization The scheduling problem in a distributed environment needs to optimize several objectives at the same time. In general, these objectives contradict each other. Such contradictory objective functions generate a set of optimal solutions in which no single solution is better than every other solution with regard to all the objective functions; this optimal solution set is called the Pareto optimal set. The multi-objective minimization problem is formulated as: minimize F(a) = (f1(a), f2(a), ..., fk(a)), where a = (a1, a2, ..., an) is the vector of decision variables, fi : S → ℝ are the objective functions and S ⊂ ℝn is the feasible region in the decision space. A solution a ∈ S is said to dominate another solution b ∈ S if fi(a) ≤ fi(b) for every objective i and fi(a) < fi(b) for at least one objective. A solution a ∈ S is said to be Pareto optimal when no solution dominates it. The set of best solutions in the objective space is called the Pareto optimal front of the multi-objective problem. Once the entire Pareto optimal set is found, the multi-objective problem is considered solved [20]. Multi-objective optimization is also known as vector optimization, as a vector of objectives is optimized instead of a single objective. When using multiple objectives, the search space is divided into two non-overlapping regions known as optimal and non-optimal. The differences between single-objective and multi-objective optimization are the handling of two search spaces and having two goals instead of one. Problem Statement A distributed environment has geographically distributed computing systems with complex combinations of hardware, software and network components. R is the set of m processing elements in the distributed environment and T is the set of n tasks assigned to the processing elements. As scheduling is performed for independent tasks, there is no communication among the tasks, and a task is assigned to exactly one processing element. Pre-emption of tasks is not allowed. As scheduling is performed statically, the computing capacity and prior load of each processing element and the computational load of each task are estimated, and the tasks are scheduled in batches; once a task is allocated, it cannot be migrated to another resource. An expected time to compute (ETC) matrix can be built from these details. An ETC matrix is a p × q matrix in which each position ETC[p][q] gives the expected time to complete job p on resource q. A row of the ETC matrix contains the completion times of one job on each resource, and each column specifies the estimated execution times of one resource for all the jobs. Hence, the proposed method is static, non-preemptive scheduling.
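A minimal sketch of this encoding, under assumed instance sizes and heterogeneity ranges, is shown below: an ETC matrix is generated, a schedule is represented as a task-to-machine assignment vector, and the two objectives (makespan and flowtime) are evaluated from it.

```python
# Minimal sketch of the problem encoding: an ETC matrix (expected time to
# compute task i on machine j) and a schedule as a task-to-machine assignment
# vector, evaluated for makespan and flowtime. Instance sizes and heterogeneity
# ranges below are illustrative, not the benchmark parameters.
import random

def random_etc(n_tasks, n_machines, task_heter=3000.0, machine_heter=100.0, seed=1):
    rng = random.Random(seed)
    base = [rng.uniform(1.0, task_heter) for _ in range(n_tasks)]          # task workload
    speed = [rng.uniform(1.0, machine_heter) for _ in range(n_machines)]   # machine factor
    return [[base[i] * speed[j] / machine_heter for j in range(n_machines)]
            for i in range(n_tasks)]

def evaluate(schedule, etc):
    """schedule[i] = machine assigned to task i; returns (makespan, flowtime)."""
    n_machines = len(etc[0])
    ready = [0.0] * n_machines          # current completion time of each machine
    flowtime = 0.0
    for task, machine in enumerate(schedule):
        ready[machine] += etc[task][machine]
        flowtime += ready[machine]      # completion time of this task
    return max(ready), flowtime

if __name__ == "__main__":
    etc = random_etc(n_tasks=20, n_machines=4)
    schedule = [random.randrange(4) for _ in range(20)]
    print(evaluate(schedule, etc))
```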
In this paper, the objectives considered are the minimization of makespan and flowtime. The makespan is the completion time (including waiting time) of the last task on a processing element, and the flowtime is the sum of the completion times of all the tasks. They can be written as makespan = min over Sch ∈ schedules of {max over t ∈ tasks of Ft} and flowtime = min over Sch ∈ schedules of {Σ over t ∈ tasks of Ft}, where Ft is the completion time of task t, tasks stands for the set of all tasks and Sch is a schedule from the set of all possible schedules. The longest job has to be scheduled on the fastest resource to minimize the makespan, while to minimize the flowtime the shortest job has to be scheduled on the fastest resource. This contradiction makes the problem multi-objective. Elitist Non-Dominated Sorting Genetic Algorithm (NSGA-II) An initial population of size N is generated randomly. Non-dominated sorting is performed on the population to classify it into a number of fronts. Crowded tournament selection is performed by assigning a crowding distance; this selects the better-ranked solution if two solutions belong to different fronts, or the solution with the higher crowding distance if they belong to the same front. Crossover and mutation are performed on the selected parent solutions to produce offspring of size N; single-point crossover and swap mutation are used. The parent and offspring populations of size N are combined to produce a population of size 2N. The population is then updated with individuals taken from the lower fronts until size N is reached; in a tie, the individual with the smaller crowding distance is dropped. The procedure is repeated, except for the first step, until the stopping criterion is met [21]. Figure 1 shows the workflow of NSGA-II with the MapReduce model. Non-dominated sorting: it is used to select the individuals for the next iteration by classifying the population. The procedure is given below [22]. Step 1: For each individual solution p in population N. Step 2: For each individual solution q in population N. Step 3: If p and q are not equal, compare p and q for all the objectives. Step 4: If, for any p, q is dominated by p, mark solution q as dominated. The first non-dominated set is formed from the unmarked solutions. Step 5: Repeat the procedure until the entire population is divided into fronts. Selection: the crowded tournament selection operator is used. An individual i wins the tournament against another individual j if one of the following is true [22]: 1. Individual i has a better (lower) rank than individual j (ranki < rankj). 2. Individuals i and j have the same rank (ranki = rankj) and individual i has the larger crowding distance (i.e., lies in a less crowded area, di > dj). Crowding distance calculation: the crowding distance is used to break ties between individuals having the same rank [22]. The procedure is as follows. Step 1: Initialize the number of individuals (x) in the front (Fa). Step 2: Set the crowding distance of every individual to zero. Step 3: Sort the individuals (x) in the front (Fa) according to each objective function obj, obj = 1, 2, ..., m, where m is the number of objectives, and let S = sort(Fa, obj). Step 4: Set the distance of the boundary individuals as S(d1) = ∞ and S(dx) = ∞. Step 5: For k = 2 to (x − 1), calculate S(dk) = S(dk) + (S(k + 1).fm − S(k − 1).fm) / (fm,max − fm,min), where S(k).fm is the value of the m-th objective function for the k-th individual in S.
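As a hedged illustration of the two NSGA-II building blocks just described, the sketch below implements non-dominated sorting (by repeatedly extracting the current non-dominated set) and crowding-distance assignment for a small set of (makespan, flowtime) points. The population values are toy numbers, and a simple sorting loop is used for clarity rather than the bookkeeping of the original fast non-dominated sort.

```python
# Minimal sketch of non-dominated sorting and crowding-distance assignment for
# a population whose objectives (makespan, flowtime) are both minimized.

def dominates(a, b):
    """a dominates b if a is no worse in every objective and better in at least one."""
    return all(x <= y for x, y in zip(a, b)) and any(x < y for x, y in zip(a, b))

def non_dominated_sort(objs):
    remaining = set(range(len(objs)))
    fronts = []
    while remaining:
        front = [i for i in remaining
                 if not any(dominates(objs[j], objs[i]) for j in remaining if j != i)]
        fronts.append(front)
        remaining -= set(front)
    return fronts

def crowding_distance(objs, front):
    dist = {i: 0.0 for i in front}
    n_obj = len(objs[0])
    for m in range(n_obj):
        order = sorted(front, key=lambda i: objs[i][m])
        dist[order[0]] = dist[order[-1]] = float("inf")      # boundary solutions
        span = objs[order[-1]][m] - objs[order[0]][m] or 1.0  # avoid division by zero
        for k in range(1, len(order) - 1):
            dist[order[k]] += (objs[order[k + 1]][m] - objs[order[k - 1]][m]) / span
    return dist

if __name__ == "__main__":
    objs = [(90, 400), (120, 310), (100, 350), (150, 300), (95, 420)]   # toy (makespan, flowtime)
    fronts = non_dominated_sort(objs)
    print("fronts:", fronts)
    print("crowding:", crowding_distance(objs, fronts[0]))
```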
Distance-Based Pareto Genetic Algorithm (DPGA) Distance-based fitness assignment was first proposed by Osyczka and Kundu [6]; the idea is to capture both closeness to the Pareto optimal front and the diversity of that front in a single fitness measure. DPGA uses a distance computation and dominance test procedure with complexity O(kη²) [4]. DPGA maintains a standard GA population and an elite population. The genetic operations such as selection, crossover and mutation are performed on the GA population, while all the non-dominated solutions are maintained in the elite population. The steps are as follows [23]:
Step 1: Initialize a random population of size N (P0).
Step 2: Calculate the fitness of the first solution (F1) and set the generation counter c = 0.
Step 3: If c = 0, insert the first element of P0 into the elite set E0, and perform the distance calculation, minimum-distance calculation, elite-member indexing and elite-population update for each member of the population (k ≥ 2 when c = 0, and k ≥ 1 when c > 0).
3a. Distance calculation: the distance between a population member and elite member k is computed in the objective space [23],
d_k = sqrt( Σ_m (e_m - f_m)² ),
where e_m is the fitness of the elite member and f_m is the fitness of the population member for objective m.
3b. Minimum distance calculation: the minimum distance of a population member over all members of the elite set [23] is
d*_min = min_{k ∈ E} d_k,
where E is the elite set and d_k is the calculated distance.
3c. Elite member index: the elite member index identifies which member of the elite set is nearest to the population member [23],
k* = arg min_{k ∈ E} d_k,
so that d_{k*} equals the minimum distance d*_min.
3d. Fitness calculation and elite set update: the elite set is updated depending on the domination status of the population member [23]. If the population member is dominated by a member of the elite set, its fitness is reduced relative to the nearest elite member, F = max(F(k*) - d*_min, 0). Otherwise, its fitness is increased, F = F(k*) + d*_min, and the population member is included in the elite set, eliminating all dominated elite members from the elite set.
Step 4: Find the minimum fitness value among all the members of the elite population; all elite solutions are assigned this fitness F_min.
4a. Minimum fitness: the minimum fitness among all members of the elite population, which is assigned to every elite solution [23], is
F_min = min_{e ∈ E} F(e),
where E is the elite set and F is the fitness of an elite solution.
Step 5: Stop if c + 1 = c_max or the termination criterion is satisfied; otherwise, go to Step 6.
Step 6: Generate the new population (P_{c+1}) by applying selection, crossover and mutation to P_c. Set c = c + 1 and go to Step 3.
No genetic operations such as reproduction, crossover and mutation are performed on the elite population E_c explicitly; the selection, crossover and mutation operators are used only to generate the new population. In this paper, tournament selection, single-point crossover and swap mutation are used. Figure 2 shows the workflow of DPGA with the MapReduce model.
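The bookkeeping of Steps 3a-3d can be sketched as below. Because the exact distance and fitness formulas were not preserved in the source text, the sketch assumes a plain Euclidean distance in objective space and the commonly described nearest-elite-plus-or-minus-distance update; both should be read as assumptions rather than the authors' exact formulas.

```python
def euclidean(u, v):
    return sum((a - b) ** 2 for a, b in zip(u, v)) ** 0.5

def dominates(a, b):
    return all(x <= y for x, y in zip(a, b)) and any(x < y for x, y in zip(a, b))

def dpga_fitness(member_obj, elite, elite_fitness):
    """Distance-based fitness for one population member (minimization).
    `elite` is a list of elite objective vectors, `elite_fitness` their
    fitness values. Returns the member's fitness and the updated elite set."""
    # 3a/3b/3c: distance to every elite member, the minimum, and its index.
    dists = [euclidean(member_obj, e) for e in elite]
    k_star = min(range(len(elite)), key=lambda k: dists[k])
    d_min = dists[k_star]

    if any(dominates(e, member_obj) for e in elite):
        # 3d: dominated members are pushed below their nearest elite neighbour.
        fit = max(elite_fitness[k_star] - d_min, 0.0)
    else:
        # Non-dominated members are rewarded and enter the elite set,
        # displacing every elite member they dominate.
        fit = elite_fitness[k_star] + d_min
        keep = [i for i, e in enumerate(elite) if not dominates(member_obj, e)]
        elite = [elite[i] for i in keep] + [member_obj]
        elite_fitness = [elite_fitness[i] for i in keep] + [fit]
    return fit, elite, elite_fitness

elite, elite_fit = [(120.0, 900.0)], [10.0]
print(dpga_fitness((100.0, 850.0), elite, elite_fit))
```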
Hadoop: Hadoop is an open-source framework that provides reliable, scalable, distributed processing and storage on large clusters of inexpensive servers [24]. Hadoop is written in Java, and users can customize the code to parallelize data processing in clusters containing thousands of commodity servers. The Apache Hadoop framework consists of the Hadoop kernel, MapReduce, the Hadoop Distributed File System (HDFS) and related projects such as HBase, Hive and ZooKeeper. At present, Hadoop plays a major role in e-mail spam detection, search engines, genome analysis in the life sciences, advertising, prediction in financial services and the analysis of log files. Linux and Windows are the preferred operating systems for Hadoop.

HDFS: The file system component of Hadoop is called HDFS. Distributed low-cost hardware is used to store data in HDFS. HDFS contains a name node and data nodes. The name node holds the metadata; if there is a request to read data from HDFS, the name node provides the locations of the data blocks. The name node also holds overall system information and is therefore called the master of HDFS. The secondary name node keeps a replica of the metadata. A data node first has to register with the name node and obtain a namespace ID; at regular intervals, each data node reports its status to the name node. HDFS splits large files into blocks and stores them on different data nodes, and each block is replicated across the nodes of the Hadoop cluster. In the event of a failure, data are re-replicated by the active Hadoop monitoring system [25].

MapReduce programming: MapReduce is a model for the distributed parallel processing of large volumes of data in a cluster in a fault-tolerant and reliable manner. MapReduce has a job tracker and task trackers. The job tracker splits a job into tasks and schedules them on the task trackers, monitors the progress of the task trackers and is responsible for re-executing failed tasks. The map phase splits a user program into sub-tasks and generates a set of key-value pairs, which are submitted to the reduce phase after a shuffle. The reduce phase performs the user-supplied reduce function on values with the same key to generate a single entity; the reduce phase is also called the merge phase. The workflow of MapReduce is presented in Figure 3. The MapReduce functions can be represented as
map :: (input_record) => list(key, value)
reduce :: (key, list(values)) => (key, aggregate(values)).

NSGA-II with MapReduce Only the fitness evaluation of the offspring is done in parallel by the workers available in the map phase; non-dominated sorting and crowded tournament selection are performed on the entire population and therefore cannot be run as concurrent processes.
1) The initial population is loaded into the coordinator.
2) The coordinator evaluates the fitness values and performs non-dominated sorting, crowded tournament selection, crossover and mutation. The offspring generated by the coordinator are sent to the job tracker.
3) The job tracker splits the offspring population and sends the parts to the workers of the map phase, which evaluate the fitness values in parallel and send them to the reduce phase.
4) A shuffle operation is performed between the map and reduce phases.
5) The workers of the reduce phase aggregate the fitness values and send them to the coordinator.
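The data flow of the five steps above can be mimicked in a few lines. The sketch below only simulates the map, shuffle and reduce phases in-process (it does not use Hadoop), and the function names are ours; it is meant to show how the coordinator farms out offspring fitness evaluation to map workers and merges the results in the reduce phase.

```python
def evaluate(schedule, etc):
    """Toy fitness: (makespan, flowtime) under back-to-back execution."""
    ready = [0.0] * len(etc[0])
    completion = []
    for t, r in enumerate(schedule):
        ready[r] += etc[t][r]
        completion.append(ready[r])
    return max(ready), sum(completion)

def map_phase(split, etc):
    # key = index of the individual, value = its (makespan, flowtime)
    return [(idx, evaluate(schedule, etc)) for idx, schedule in split]

def reduce_phase(partials):
    # Merge the key-value pairs produced by every map worker.
    merged = {}
    for part in partials:
        for key, value in part:
            merged[key] = value
    return merged

# Coordinator side: split the offspring into chunks, "send" each chunk to a
# map worker, then merge the shuffled outputs in the reduce phase.
etc = [[10, 20], [30, 15], [25, 18], [12, 22]]
offspring = [[0, 1, 0, 1], [1, 1, 0, 0], [0, 0, 1, 1]]
splits = [list(enumerate(offspring))[i::2] for i in range(2)]   # 2 map workers
fitness = reduce_phase([map_phase(s, etc) for s in splits])
print(fitness)
```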
DPGA with MapReduce This approach parallelizes the distance calculation, minimum-distance calculation, elite-member indexing and fitness calculation. The evaluation of individuals is executed in parallel, because the fitness calculation of one individual is independent of the others in the population.
1) The initial population is seeded into the coordinator.
2) The coordinator produces the offspring population.
3) The job tracker divides the offspring population into sub-populations and assigns them to the workers in the map phase.
4) The workers perform the distance calculation, minimum-distance calculation, elite-member indexing and fitness evaluation for their assigned individuals concurrently.
5) The workers of the reduce phase collect the fitness values, perform the merge operation and send the result back to the coordinator for the next generation.

Simulation Results and Discussion The proposed DPGA with MapReduce model is implemented; to estimate its efficiency, the NSGA-II with MapReduce model is implemented as well. The metrics considered for performance evaluation are execution time, makespan and flow time.

Simulation Environment The simulation is designed by writing programs for Hadoop MapReduce in Java. The Hadoop 1.2.1 stable version is used to set up a 4-node cluster backed by HDFS. The Hadoop cluster runs on the Ubuntu Linux platform and Java 1.6 is used for writing the code. All 4 systems have i5 processors, 4 GB RAM and a 500 GB hard disk. The proposed method is evaluated based on the factors listed in Table 1. Random generation with a uniform distribution is used for the simulation, and each resource can execute one task at a time. The ETC matrix is generated according to three characteristics: task heterogeneity, resource heterogeneity and consistency. The various instances are labeled x-yy-zz, where x is the consistency type (co: consistent, ic: inconsistent, sc: semi-consistent), yy is the task heterogeneity (hi: high, lo: low) and zz is the processor heterogeneity (hi: high, lo: low).

Result Discussions The DPGA with MapReduce and NSGA-II with MapReduce models are applied to all 12 problem instances. To compare the performance of the multi-objective scheduling algorithms in a distributed computing environment, the Pareto optimal solutions produced by the two methods are plotted in Figures 4-7 for all the instances. For a fair comparison, each algorithm was run 10 times with different random seeds, and the best solutions are reported for both DPGA with MapReduce and NSGA-II with MapReduce. The makespan and mean flow time are expressed in the same time units, and the obtained Pareto optimal solutions are plotted on a scale of tens of thousands of time units. The plots indicate that DPGA with MapReduce produces better schedules in terms of the minimization of makespan and flow time in all cases compared to the NSGA-II with MapReduce method. The algorithms are run for 1000 iterations with an initial population of 100. It is also noted that the number of solutions obtained by DPGA with MapReduce increases with the population size and the number of iterations. Figures 4-7 show that the DPGA with MapReduce model generates better makespan and mean flow time for all consistency types and all levels of task and processor heterogeneity.
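A toy generator for such x-yy-zz instances might look as follows; the heterogeneity ranges and the row-sorting used to impose consistency are illustrative assumptions, not the exact parameters of Table 1.

```python
import random

def generate_etc(n_tasks, n_resources, task_hi, proc_hi, consistency, seed=1):
    """Generate a toy ETC matrix in the spirit of the x-yy-zz instances.
    The heterogeneity ranges below are illustrative choices only."""
    rng = random.Random(seed)
    r_task = 3000 if task_hi else 100        # task heterogeneity range
    r_proc = 1000 if proc_hi else 10         # processor heterogeneity range
    etc = []
    for _ in range(n_tasks):
        base = rng.uniform(1, r_task)        # baseline workload of the task
        row = [base * rng.uniform(1, r_proc) for _ in range(n_resources)]
        if consistency == "co":              # consistent: same machine order for every task
            row.sort()
        elif consistency == "sc":            # semi-consistent: only even columns kept ordered
            row[::2] = sorted(row[::2])
        etc.append(row)                      # "ic" (inconsistent): leave unsorted
    return etc

etc = generate_etc(n_tasks=8, n_resources=4, task_hi=True, proc_hi=False, consistency="sc")
print(etc[0])
```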
Performance Comparison of NSGA-II and DPGA The Pareto optimal solution sets obtained by NSGA-II and DPGA satisfy the different objectives to some extent [26]. A fuzzy-based technique is used to select the best compromise solution from the attained non-dominated set of solutions [27]; the fuzzy sets are defined using a triangular membership function. Consider f_k^max and f_k^min to be the maximum and minimum values of the k-th objective function over the Pareto set; a solution with value f_k in the k-th objective is described by a membership function µ_k defined as
µ_k = 1, if f_k ≤ f_k^min,
µ_k = (f_k^max - f_k) / (f_k^max - f_k^min), if f_k^min < f_k < f_k^max,
µ_k = 0, if f_k ≥ f_k^max.
The value of the membership function indicates how well a non-dominated solution satisfies the objective. To measure the overall performance of each solution, the sum of the membership values µ_k is computed over the objectives k = 1, 2, ..., m. The performance of each non-dominated solution i can then be rated with respect to all n non-dominated solutions by normalizing it over the sum for the whole set:
µ^i = Σ_{k=1}^{m} µ_k^i / Σ_{j=1}^{n} Σ_{k=1}^{m} µ_k^j,
where n is the number of solutions and m is the number of objective functions. The solution with the largest value of µ^i is chosen as the best compromise solution. The makespan and mean flow time values of the best compromise solutions attained are listed in Table 2. The percentage reduction in makespan and mean flow time of DPGA with MapReduce over NSGA-II with MapReduce is calculated as (value_NSGA-II - value_DPGA) / value_NSGA-II × 100 (Table 2). DPGA with MapReduce achieves a reduction in makespan and flow time of 12% and 14% over the values of NSGA-II with MapReduce, and Figures 4-7 also indicate that DPGA with MapReduce outperforms NSGA-II with MapReduce.

Execution Time Comparison of NSGA-II and DPGA NSGA-II with MapReduce and DPGA with MapReduce are executed on a single node and on a 4-node Hadoop cluster. The time taken by both algorithms to find the optimal schedule is listed in Table 3. DPGA with MapReduce has a lower execution time than NSGA-II with MapReduce, and as the number of nodes in the Hadoop cluster is increased, the execution time of both algorithms is reduced.

Conclusion In distributed computing systems, allocating tasks to the processing elements in the minimum amount of time is a key step for better utilization of resources. In this paper, NSGA-II with MapReduce and DPGA with MapReduce are implemented in a distributed environment and their makespan, mean flow time and execution time are compared. From the obtained results, it is noted that the DPGA with MapReduce model achieves reductions in makespan, mean flow time and execution time of 12%, 14% and 13%, respectively. The simulation results also show that the execution time of these algorithms is reduced as the number of nodes in the Hadoop cluster is increased. Future work could extend this by implementing schedulers with other evolutionary algorithms under the MapReduce model so that they execute in parallel without coordination issues.

Table 2. Comparison of NSGA-II with MapReduce and DPGA with MapReduce. Table 3. Comparison of execution time.
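The fuzzy best-compromise selection described above can be sketched as follows, using the triangular membership and the normalization over all non-dominated solutions; the toy front and the helper names are ours.

```python
def best_compromise(front):
    """Pick the best compromise solution from a Pareto front of minimized
    objective vectors using a triangular fuzzy membership function."""
    m = len(front[0])
    f_min = [min(p[k] for p in front) for k in range(m)]
    f_max = [max(p[k] for p in front) for k in range(m)]

    def mu(p):
        vals = []
        for k in range(m):
            if f_max[k] == f_min[k]:
                vals.append(1.0)
            else:
                # 1 at the best value, 0 at the worst, linear in between.
                vals.append(min(1.0, max(0.0, (f_max[k] - p[k]) / (f_max[k] - f_min[k]))))
        return sum(vals)

    raw = [mu(p) for p in front]
    total = sum(raw)
    norm = [v / total for v in raw]          # normalized membership per solution
    best = max(range(len(front)), key=lambda i: norm[i])
    return best, norm

front = [(120.0, 800.0), (100.0, 950.0), (140.0, 700.0), (110.0, 870.0)]
print(best_compromise(front))
```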
2017-11-07T18:21:36.950Z
2016-05-04T00:00:00.000
{ "year": 2016, "sha1": "e13e78d22cac0f7dfb88c89f96f26c3a6c09afed", "oa_license": "CCBY", "oa_url": "http://www.scirp.org/journal/PaperDownload.aspx?paperID=66417", "oa_status": "GOLD", "pdf_src": "ScienceParseMerged", "pdf_hash": "e13e78d22cac0f7dfb88c89f96f26c3a6c09afed", "s2fieldsofstudy": [ "Computer Science" ], "extfieldsofstudy": [ "Computer Science" ] }
256825708
pes2o/s2orc
v3-fos-license
Acquired von willebrand syndrome in patients with Philadelphia-negative myeloproliferative neoplasm Background Acquired von Willebrand syndrome (AVWS) has not been investigated in Korean patients with Philadelphia chromosome-negative myeloproliferative neoplasm. Methods This study analyzed the prevalence at diagnosis and clinical features of AVWS in patients with essential thrombocythemia (ET), polycythemia vera (PV), prefibrotic/early primary myelofibrosis (pre-PMF), or overt PMF (PMF) diagnosed between January 2019 and December 2021 at Chungam National University Hospital, Daejeon, Korea. AVWS was defined as below the lower reference limit (56%) of ristocetin cofactor activity (VWFRCo). Results Sixty-four consecutive patients (36 with ET, 17 with PV, 6 with pre-PMF, and 5 with PMF; 30 men and 34 women) with a median age of 67 years (range, 18‒87 yr) were followed for a median of 25.1 months (range, 2.6‒46.4 mo). AVWS was detected in 20 (31.3%) patients at diagnosis and was most frequent in ET patients (41.4%), followed by patients with pre-PMF (33.3%) and PV (17.6%) patients. VWFRCo was negatively correlated with the platelet count (r=0.937; P=0.002). Only one episode of minor bleeding occurred in a patient with ET and AVWS. Younger age (<50 yr) [odds ratio (OR), 7.08; 95% confidence interval (CI), 1.27‒39.48; P=0.026] and thrombocytosis (>600×109/L) (OR, 13.70; 95% CI, 1.35‒138.17; P=0.026) were independent risk factors for developing AVWS. Conclusion AVWS based on VWFRCo was common in patients with ET and pre-PMF, but less common in patients with PV in the Korean population. Clinically significant bleeding is rare in these patients. Von Willebrand disease (VWD) is a common genetic bleeding disorder that affects men and women equally and often manifests itself as mucosal bleeding. This can be due to quantitative (Types 1 and 3) or qualitative (Type 2) defects in the von Willebrand factor (VWF) [4]. Acquired VWD (AVWD) or AVWS can occur in many settings, such as MPN, plasma cell dyscrasia, other lymphoproliferative dis-orders, autoimmune disorders, and causes of increased shear forces, such as aortic stenosis or other structural heart diseases and mechanical circulatory support [5]. Depletion of VWF, particularly high molecular weight multimers, can cause mucocutaneous bleeding and arteriovenous malformations, especially in the gastrointestinal tract [5,6]. Increased platelet count in patients with MPN is believed to be associated with AVWS. This is most frequently reported in patients with ET, but also in patients with PV [5,7]. Due to the risk of bleeding, many physicians hesitate to prescribe low-dose aspirin in patients with extreme thrombocytosis and AVWS [8]. AVWS has rarely been addressed in Korean patients with Ph -MPN. Only one case of AVWS has been reported in a patient with PV [9]. In 2020, the Korean Society of Hematology revised the guidelines for diagnosing and managing MPN based on published evidence and the experience of an expert panel. However, AVWS was not addressed [10]. Therefore, the prevalence and clinical features of AVWS were based on reports from Western countries. The incidence of MPN is increasing in Korea [11][12][13], which is mainly attributable to changes in diagnostic criteria and studies on driver gene mutations, so there is a need for information on AVWS in these patients. In this prospective observational study, we analyzed the prevalence at diagnosis and clinical features of AVWS in Ph -MPN patients in a Korean population. 
Patients Patients diagnosed with ET, PV, pre-PMF, or PMF between January 2019 and December 2021 at the Chungnam National University Hospital, Daejeon, Korea, were enrolled in this study. At the time of diagnosis, the patients underwent tests for the following parameters: complete blood count (CBC), prothrombin time (PT), activated partial thromboplastin time (aPTT), VWF factor VIII-related antigen (VWF:Ag), VWF ristocetin cofactor activity (VWF:RCo), blood chemistry, driver gene mutations and bone marrow examination. In patients with low VWF:RCo at diagnosis, VWF:Ag and VWF:RCo were followed up after initiating cytoreductive treatment at weeks 2 and 6, and then every 1-3 months. The International Prognostic Score for Essential Thrombocythemia (IPSET) [14] and the International Prognostic Scoring System (IPSS) [15] were used for the prognostic stratification of patients with ET and PMF, respectively. Hydroxyurea or anagrelide was used for cytoreduction according to standard recommendations, drug availability, and compliance. Patients with PV patients underwent regular phlebotomy, so their hematocrit was <45%. Low-dose aspirin (100 mg/day) was prescribed to prevent thrombosis, except in low-and very low-risk patients. Based on risk stratification, aspirin was not administered to patients with VWF:RCo <30% who did not require cytoreduction. This study was approved by the Institutional Review Board of the Chungnam National University Hospital. Definition of AVWS Based on previous studies on AVWS in patients with MPN [16,17], AVWS was diagnosed when all four criteria were met: VWF:RCo <56% (lower reference limit), VWF:RCo/VWF:Ag ratio <0.7, no personal or family history of bleeding disorders, and no history of frequent or abnormally prolonged episodes of bleeding. VWF:Ag and VWF:RCo were determined using an enzyme-linked immunosorbent assay and a fixed platelet aggregation assay, respectively (GC Labs, Yongin, Korea). Definition and classification of splenomegaly Splenomegaly was defined using previously described criteria [18]. Briefly, 'palpable splenomegaly' indicated that the spleen was palpable below the left costal margin, and 'volumetric splenomegaly' indicates that the volume of the spleen was greater than the mean plus three standard deviations of the reference volumes based on both age and body surface area. Driver gene mutation analyses The Janus kinase 2 mutation (JAK2V617F) was identified using a quantitative allele-specific real-time polymerase chain reaction (PCR). A calreticulin (CALR) mutation was detected in exon 9 by fragment analysis and Sanger sequencing. The myeloproliferative leukemia gene mutation (MPLW515K/L) was evaluated using PCR and Sanger sequencing. Definitions of thrombotic and hemorrhagic events Thrombotic events included cerebrovascular (ischemic stroke, transient ischemic attack, and venous sinus thrombosis), coronary (any ischemic heart disease, including acute coronary syndrome), splanchnic and peripheral thromboembolism. Bleeding was defined based on guidelines proposed by a group of VWD experts [19]. Major bleeding included episodes that required hospital admission, surgical intervention, blood transfusion, hemoglobin drop ≥2 g/dL, bleeding involving critical areas (e.g., intracranial, intraspinal, intraocular, retroperitoneal, intraarticular, pericardial, or intramuscular with compartment syndrome), or recurrent bleeding that affects the ability to attend normal school, work, or social activities. 
Minor bleeding included episodes other than major bleeding.

Statistical analysis Descriptive data are presented as means±standard deviations (SD), medians (range), or percentages, and were compared using Student's t-test, the chi-square test, or Fisher's exact test. Correlations between VWF:RCo and various parameters were assessed using Pearson correlation analysis. The risk factors for the development of AVWS were determined using logistic regression analysis. Statistical analyses were performed using SPSS software (version 24.0; IBM Corp., Armonk, NY, USA). P<0.05 was considered statistically significant.

Patient characteristics During the study period, 64 patients were diagnosed with MPN. All 64 consecutive patients (36 with ET, 17 with PV, 6 with pre-PMF and 5 with PMF; 30 men and 34 women) with a median age of 67 years (range, 18-87 yr) were enrolled. The patients were followed for a median of 25.1 months (range, 2.6 to 46.4 mo). Palpable splenomegaly was not observed in any of the patients. Volumetric splenomegaly was detected in all patients with pre-PMF and PMF, while it was found in 88.2% of patients with PV and 66.7% of patients with ET. WBC counts and lactate dehydrogenase (LDH) levels of patients with pre-PMF were higher than those of patients with ET [WBC, 15.9±3.3×10⁹/L and 10.9±4.7×10⁹/L].

Clinical characteristics of patients with AVWS The clinical characteristics at the time of diagnosis and the clinical course of patients with AVWS were compared with those of patients without AVWS. Patients with ET with AVWS (N=15) were younger than those without AVWS (N=21) [56 yr and 65 (40-84) yr; P=0.035]. Volumetric splenomegaly, WBC count, monocyte count, hemoglobin level, LDH, PT, and aPTT did not differ between the two groups. The platelet counts of ET patients with AVWS were higher than those without; however, the difference was not significant (949.7±440.0×10⁹/L and 813.7±170.7×10⁹/L; P=0.205). JAK2V617F positivity and the allele burden did not differ between the two groups. Thrombotic vascular events were detected in 13.3% of ET patients with AVWS and 19.0% of those without AVWS (P=0.650). No major bleeding was observed in any of the patients. Only one episode of minor bleeding (hematoma at an intramuscular injection site) occurred in a patient with ET and AVWS. Of the 15 patients with ET with AVWS, 11 underwent cytoreductive therapy. VWF:RCo normalized in all 11 patients after a median of 8 weeks (range, 2-18 wk) (Fig. 2, Table 3). Three patients with ET had VWF:RCo <30% at the time of diagnosis. These patients had significantly higher platelet counts than those with VWF:RCo ≥30% (1,228.3±824.8×10⁹/L and 837.8±228.8×10⁹/L; P=0.037) (Table 3). Patients with PV with AVWS (N=3) had sig- (Table 4). The low VWF:RCo in the two patients with pre-PMF normalized after initiating cytoreductive treatment. No bleeding episodes were observed in the patients with pre-PMF (data not shown). Thrombocytosis persisted in many patients when the VWF:RCo ratio was normalized (≥450×10⁹/L in 60.0%; ≥600×10⁹/L in 40.0% of the patients) (Fig. 3).

DISCUSSION This study investigated the frequency of low VWF activity at the time of diagnosis and its clinical relevance in 64 newly diagnosed patients with Ph -MPN. To our knowledge, this is the first study to address Ph -MPN-related AVWS in the Korean population. Ph -MPN are characterized by frequent thrombotic vascular events and, to a lesser degree, bleeding.
The clinical features of thrombotic vascular events in Korean patients with Ph -MPN differ somewhat from those in Western cases regarding the time of occurrence and sites involved [1]. Therefore, it is reasonable to obtain information on VWF abnormalities in Korean patients with Ph -MPN. Many previous reports on AVWS include the results of VWF tests not only at the time of MPN diagnosis, but also during follow-up [16,17,20]. Cytoreductive treatment strongly af- fects VWF activity [21]; therefore, testing VWF after treatment initiation could lead to an erroneous prevalence rate of AVWS. The present study enrolled only newly diagnosed patients to determine the exact prevalence of AVWS. AVWS was detected in 41.7% of patients with ET and 17.6% of patients with PV, and clinically significant bleeding was rarely observed. Individuals with type O generally have lower plasma levels of VWF than those with other blood types [22]. The present study did not incorporate blood type into defining AVWS; therefore, the prevalence of AVWS may have been overestimated. Estimates of AVWS in Western studies are 20% to 70% in ET and 12% to 30% in patients with PV according to laboratory criteria, but clinically significant bleeding is much less common, at 4% to 7% in some studies, affecting less than half of those with a laboratory diagnosis [5,7]. Therefore, the prevalence of AVWS and bleeding in Korean patients did not differ from those in cases of Western patients. AVWS has seldom been reported in patients with PMF. We also did not diagnose AVWS in any of the five patients with PMF enrolled in the study. However, our data cannot be considered conclusive due to the limited number of patients. We found the abnormal VWF:RCo ratio normalized after starting cytoreductive treatment in all patients, although platelet count remained high. This correction of VWF:RCo after cytoreduction could be a reason why bleeding was uncommon, as most of the patients with ET and PV in this study were placed on cytoreductive treatment according to standard risk stratification. Most reports of major bleeding in patients with MPN with AVWS are based on anecdotes, mainly in patients with unrecognized AVWS [23][24][25][26]. These observations indicate that most hemorrhagic events induced by AVWS occur before or at the time of MPN diagnosis and that proper cytoreduction can minimize the risk of bleeding. Taken together, it remains to be determined whether AVWS screening should be performed in all patients with Ph -MPN at the time of diagnosis. A subpopulation of patients with ET present with a platelet count ≥1,000×10 9 /L, which is arbitrarily called extreme thrombocytosis [27]. Extreme thrombocytosis is believed to be a prerequisite for AVWS [28]. Bleeding diathesis in ET or PV is currently believed to be multifactorial in etiology [29]. AVWS in patients with ET or PV is characterized by the loss of large VWF multimers, related to their increased adsorption to platelets and thus increased proteolysis by ADAMTS13 in a platelet count-dependent manner [21, [30][31][32]. Therefore, the guidelines recommend using aspirin with caution in both ET and PV in the presence of extreme thrombocytosis, as it promotes the development of AVWS [29,33]. However, AVWS can occur even when the platelet count is <1,000×10 9 /L [16,17,34]. In the present study, only 16.7% of patients with ET and 11.8% of patients with PV had platelet counts ≥1,000×10 9 /L. Therefore, many patients with AVWS had platelet counts <1,000×10 9 /L. 
These data support the suggestion that laboratory evaluation for AVWS is recommended in the presence of abnormal bleeding, regardless of the platelet count [29]. In a previous study, younger age, platelet count, hemoglobin level and the JAK2V617F mutation independently predicted the development of AVWS among patients with ET, while only the platelet count predicted its development among patients with PV [17]. In the present study, younger age (<50 yr) and a high platelet count (≥600×10⁹/L) were independent risk factors for developing AVWS in patients with Ph -MPN. AVWS has been reported even in reactive thrombocytosis [28]. On the contrary, of the 21 patients with chronic myeloid leukemia and a platelet count ≥450×10⁹/L (≥600×10⁹/L in 10 patients), only one (4.8%) had a low VWF:RCo (personal observation). As mentioned above, most of the patients with AVWS in the present study had platelet counts <1,000×10⁹/L, and cytoreductive treatment normalized VWF:RCo even when the platelet count was not normalized. Taken together, thrombocytosis is the main driver of AVWS; however, other factors are also involved. Platelet activation has been suggested to be involved in this diathesis [35]. The JAK2V617F mutation per se and its allele burden did not affect the occurrence of AVWS in the present study. However, these observations need to be validated in further studies with more patients. In conclusion, AVWS defined based on VWF:RCo was common in patients with ET and pre-PMF, and less common in patients with PV in the Korean population. No major bleeding was observed in any of the patients. To determine the role of AVWS screening at the time of MPN diagnosis, further characterization of this diathesis is warranted in future studies that recruit more patients. Authors' Disclosures of Potential Conflicts of Interest No potential conflicts of interest relevant to this article were reported.
2023-02-14T06:18:04.428Z
2023-02-08T00:00:00.000
{ "year": 2023, "sha1": "793cb021f0791d0516641beba2ca1c59ab5c155d", "oa_license": "CCBYNC", "oa_url": "https://www.bloodresearch.or.kr/journal/download_pdf.php?doi=10.5045/br.2023.2022218", "oa_status": "GOLD", "pdf_src": "PubMedCentral", "pdf_hash": "9aedf55c8f7240213a9805792822d9ed89e2d6d5", "s2fieldsofstudy": [ "Medicine" ], "extfieldsofstudy": [ "Medicine" ] }
91076709
pes2o/s2orc
v3-fos-license
PO-242 Myoferlin controls mitochondrial structure and metabolism in pancreatic ductal adenocarcinoma, and affects tumor aggressiveness Introduction Pancreatic ductal adenocarcinoma (pdac) is the most common type of pancreatic cancer, and the third leading cause of cancer related death. therapeutic options remain very limited and are still based on classical chemotherapies. cell fraction can survive to the chemotherapy and is responsible for tumor relapse. it appears that these cells rely on oxydative phosphorylation (oxphos) for survival. Myoferlin, a membrane protein involved in cell fusion was recently shown by our laboratory to be overexpressed in pancreatic cancer. Material and methods We used pancreatic cancer cell lines depleted in myoferlin to assess mitochondrial function with an extracellular flux analyser. pancreas cancer samples from the institutional biobank with matched pet scan data were used to correlate myoferlin abundance and glycolysis.results and discussionsin the present study, we discovered that myoferlin was more expressed in cell lines undergoing (oxphos) than in glycolytic cell lines. in the former cell lines, we showed that myoferlin silencing reduced oxphos activity and forced cells to switch to glycolysis. the decrease in oxphos activity is associated with mitochondrial condensation and network disorganization. an increase of dynamin-related protein (drp)-1 phosphorylation in myoferlin-depleted cells led us to suggest mitochondrial fission, reducing cell proliferation, atp production and inducing autophagy and ros accumulation. electron microscopy observation revealed mitophagy, suggesting mitochondrial alterations. To confirm the clinical importance of myoferlin in pdac, we showed that low myoferlin expression was significantly correlated to high overall survival. myoferlin staining of pdac sections was negatively correlated with several 18fdg pet indices indicating that glycolytic lesions had less myoferlin. these observations are fully in accordance with our in vitro data.conclusionas the mitochondrial function was associated with cell chemoresistance, the metabolic switch induced by myoferlin silencing could open up a new perspective in the development of therapeutic strategies. among them, targeting functional domains (c2, dysf, …) of myoferlin should be a priority. Introduction Pancreatic ductal adenocarcinoma (PDAC) is the most common type of pancreatic cancer, and the third leading cause of cancer related death. Therapeutic options remain very limited and are still based on classical chemotherapies. Cell fraction can survive to the chemotherapy and is responsible for tumor relapse. It appears that these cells rely on oxydative phosphorylation (OXPHOS) for survival. Myoferlin, a membrane protein involved in cell fusion was recently shown by our laboratory to be overexpressed in pancreatic cancer. Material and methods We used pancreatic cancer cell lines depleted in myoferlin to assess mitochondrial function with an extracellular flux analyser. Pancreas cancer samples from the institutional biobank with matched PET scan data were used to correlate myoferlin abundance and glycolysis. Results and discussions In the present study, we discovered that myoferlin was more expressed in cell lines undergoing (OXPHOS) than in glycolytic cell lines. In the former cell lines, we showed that myoferlin silencing reduced OXPHOS activity and forced cells to switch to glycolysis. 
The decrease in OXPHOS activity is associated with mitochondrial condensation and network disorganization. An increase of Dynaminrelated protein (DRP)-1 phosphorylation in myoferlin-depleted cells led us to suggest mitochondrial fission, reducing cell proliferation, ATP production and inducing autophagy and ROS accumulation. Electron microscopy observation revealed mitophagy, suggesting mitochondrial alterations. To confirm the clinical importance of myoferlin in PDAC, we showed that low myoferlin expression was significantly correlated to high overall survival. Myoferlin staining of PDAC sections was negatively correlated with several 18FDG PET indices indicating that glycolytic lesions had less myoferlin. These observations are fully in accordance with our in vitro data. Conclusion As the mitochondrial function was associated with cell chemoresistance, the metabolic switch induced by myoferlin silencing could open up a new perspective in the development of therapeutic strategies. Among them, targeting functional domains (C2, Dysf, …) of myoferlin should be a priority. PO-243 UNCOUPLING FOXO3A MITOCHONDRIAL AND NUCLEAR FUNCTIONS IN CANCER CELLS UNDERGOING METABOLIC STRESS AND CHEMOTHERAPY 1 Introduction While aberrant cancer cell growth is frequently associated with altered biochemical metabolism, normal mitochondrial functions are usually preserved and necessary for full malignant transformation. The transcription factor FoxO3A is a key determinant of cancer cell homeostasis, playing a dual role in survival/death response. We recently described a novel mitochondrial arm of the AMPK-FoxO3A axis in normal cells upon nutrient shortage. Material and methods After extensive characterisation of mitochondrial FoxO3A function in vitro in several cell lines and tumours, we generated FoxO3A-knockout cancer cells with the CRISPR/Cas9 system and reconstituted FoxO3A expression with wild-type or mutant vectors. Results and discussions Here we show that in metabolically stressed cancer cells, FoxO3A is recruited to the mitochondria through activation of MEK/ERK and AMPK which phosphorylate serine 12 and 30, respectively, on FoxO3A N-terminal domain. Subsequently, FoxO3A is imported and cleaved to reach mitochondrial DNA, where it activates expression of the mitochondrial genome to support mitochondrial metabolism and cell survival. Using FoxO3A -/cancer cells generated with the CRISPR/Cas9 genome editing system and reconstituted with FoxO3A mutants being impaired in their nuclear or mitochondrial subcellular localization, we show that mitochondrial FoxO3A promotes survival in response to metabolic stress. In cancer cells treated with chemotherapeutic agents, accumulation of FoxO3A into the mitochondria promoted survival in a MEK/ERK-dependent manner, while mitochondrial FoxO3A was required for apoptosis induction by metformin. Conclusion Elucidation of FoxO3A mitochondrial vs. nuclear functions in cancer cell homeostasis might help devise novel personalised therapeutic strategies to selectively disable FoxO3A pro-survival activity and manipulate cellular metabolism to counteract cancer initiation and progression. CATECHOL-O-METHYLTRANSFERASE: A DUAL-ROLE PLAYER IN DIFFERENT BREAST CANCER SUBTYPES? 1 L Janáčová, 1,2 J Faktor, 1 L Čápková, 3 L Knopfová, 3 P Beneš, 4 P Fabian, 1 P Bouchal*. Introduction Catechol-O-methyltransferase (COMT) plays an essential role in detoxification of catechols by transferring the methyl group from S-adenosyl-l-methionine to the substrate. 
In breast cancer, it catalyses methylation of oestrogen metabolites to block their oestrogenicity which prevents their oxidation to carcinogenic quinones. In this study we investigated whether its tumour suppressor role is limited to oestrogen receptor dependent breast cancer, or whether it has a general validity. Material and methods A differential cell surface proteomics analysis with SILAC-LC-MS quantification was performed on MDA-MB-231 breast cancer cell line and its clone selected for higher migration capacity. Analysis of migration and invasiveness of MCF7 cells stably transfected with COMT was performed using Transwell assay. The protein-level expression of COMT in different breast cancer subtypes was determined by
2019-04-02T13:14:52.824Z
2018-06-01T00:00:00.000
{ "year": 2018, "sha1": "22cf603edfc605b6c0a45e91cfae67fdc8354228", "oa_license": "CCBYNC", "oa_url": "https://doi.org/10.1136/esmoopen-2018-eacr25.275", "oa_status": "GOLD", "pdf_src": "BMJ", "pdf_hash": "e1e27a18088f1124403ceca0b78a34979e3c1d46", "s2fieldsofstudy": [ "Medicine", "Biology" ], "extfieldsofstudy": [ "Biology" ] }
219105553
pes2o/s2orc
v3-fos-license
Accuracy assessment of Global Human Settlement Layer (GHSL) built-up products over China Building a density map over large areas could provide essential information of land development intensity and settlement condition. It is crucial for supporting studies and planning of human settlement environment. The Global Human Settlement Layer (GHSL) is a comprehensive data set of mapping human settlement at a global scale, which was produced by the Joint Research Centre (JRC), European Commission. The built-up density is an important layer of GHSL data set. Currently, the validation of the GHSL built-up area products was preliminarily conducted over the United States and European countries. However, as a typical East Asian region, China is quite different from the United States, Europe, and other regions in terms of building forms and urban layouts. Therefore, it is necessary to perform an accuracy assessment of GHSL data set in Asian countries like China. With individual building footprint data of 20 typical cities in China, this paper presents our effort to validate the GHSL built-up area products. The aggregation mean and neighborhood search based algorithms are adopted for matching building footprint data and the GHSL products, through the regression analysis at per-pixel level, the building density map in raster format are generated as validation data. The accuracy index of GHSL built-up area was calculated for the study areas, and the validation methods were explored for GHSL built-up products at large scale. The results show that the built-up layer aggregated by the building footprint have the highest correlation with the coarse resolution GHSL built-up products, but GHSL tends to underestimate the building density of low-density areas and overestimate the areas with high density. This study suggests that GHSL built-up area products in 20 representative Chinese cities of China could provide quantitative information about built-up areas, but the product accuracy still need to be improved in the regions with heterogeneous formations of human settlements like China. There is a big picture of mapping high accuracy built-up density of China with the training data set acquired by the study. Introduction Land cover is an important factor of environmental studies of the earth surfaces [1][2][3], since the land cover/land use change, environmental pollution, land degradation and loss of biodiversity have become increasingly serious. Thus, timely and reliable global land cover data has become an important data set for ecosystem assessment, environmental modeling, etc. [4]. Urbanization is one of the most significant factors that human influence the land cover of earth surfaces. Presently, more than 50% of the world's population live in urban areas, compared with which is only 30% in 1950, and it is projected that the urban population will account for 66% of the world population by 2050 [5]. Although cities cover only a relatively small portion of the earth surfaces, studies in urban areas play a crucial role in human housing demand, climate change and response, disaster risk prevention, urban development and other sustainable development goals [6]. Currently, it is well addressed that remote sensing technology is a promising solution for large-area observation [7][8][9]. Satellite remote sensing has become an important means of obtaining information on land surfaces [10][11][12]. 
As the high-resolution remote sensing satellite data can provide detailed urban surfaces, Xu investigated the land cover information extraction using IKONOS panchromatic data with 1m resolution [13]. In [14], the principal component analysis is employed to fuse the texture and structure features derived from Landsat-7 ETM+ panchromatic data to extract the building information. Currently, the Global Urban Footprint (GUF) dataset is produced for urban mapping, it is based on the satellite SAR imagery acquired by the German satellites TerraSAR-X and TanDEM-X. With a fully automated processing system, global coverage of more than 180,000 very high-resolution SAR images with 3m ground resolution, mainly acquired between 2010 and 2013, were processed, the scattering amplitude is combined with the derived texture information to depict the human settlement. In addition, auxiliary data such as digital elevation models were fused with the SAR images to improve the classification accuracy [15]. The Global Human Built-up And Settlement Extent (HBASE) Dataset is a global scale product derived from the Global Land Survey (GLS) Landsat dataset for the year of 2010 [16], the product is only for the mapping and monitoring of urbanization. The Global Human Settlement Layer (GHSL) produced by the Joint Research Centre (JRC) provides much more detailed information on the growth of buildings and populations over the past 40 years [17]. The products contain comprehensive data layer for urbanization assessment, land cover change, urban planning and management [6], species changes studies [18], but the accuracy of the product needs further validation. With urban building density information, many applications would be able to carry out, e.g. the economic development of the city and the expansion of urban space. Combining with urban lighting data and urban traffic data, the future urban development can also be predicted, which is of great significance to integrate urban management and improvement of the urban environment [12,[19][20][21]. However, there is no well validated global-scale human settlement mapping products for ecological environment studies [22]. Therefore, the development of large-scale building density remote sensing products, the formation of large-scale and long time series of remote sensing mapping products has become an urgent need for both research and applications. The GHSL built-up area products are promising data collection set for characterizing the built-up area at large scale. However, GHSL is an experimental product, it is currently validated only in the United States and European countries [10]. Due to differences in population density and building structure between Asia and Europe, the validation results of GHSL builtup area products in the United States and Europe cannot assure the reliability of the accuracy in Asia. Therefore, this paper focuses on the validation and analysis of the accuracy of GHSL built-up area products in 20 representative Chinese cities in China. It is expected to provide reference accuracy information for applications of GHSL data in China. Based on the maps of building footprint derived from open geospatial web service in China, i.e., Baidu Map. The accuracy of the GHSL built-up area products at 250m resolution and 1000m resolution in 20 typical Chinese cities across different provinces were quantitatively assessed. 
The accuracy of GHSL built-up products in China is evaluated by aggregating the building footprint into building density products with the same resolution as GHSL built-up products. The results demonstrate that there is a certain misestimation in GHSL products over 20 representative Chinese cities. The results are expected to provide quantitative accuracy information of the GHSL built-up products application in China. Study area To better represent the different urban patterns and building forms, this paper selected 20 typical cities as study area located in different administrative regions of China with varieties of economic developments, population densities and physical environments (Fig 1, Table 1). The building density patterns from GHSL in 20 cities of China were validated in this paper. GHSL built-up areas products GHSL is a global scale human settlement map product extracted from Landsat images, it is developed with a classification method based on symbolic machine learning [23,24]. To utilize long term remote sensing data record, GHSL adopts images acquired by the Multi-Spectral Scanner (MSS) and Thematic Mapper (TM), Enhanced Thematic Mapper (ETM+), Operational Land Imager (OIL) and Digital Elevation Model (DEM) for characterizing human settlement. The time series consisted mostly of four epochs, 1975, 1990, 2000, and 2014. The builtup area products in each epoch are provided with the resolution of 38m, 250m and 1000m. Building footprint data The accurate building footprint data acquired for the validation is obtained from the Baidu map (https://map.baidu.com/), the online web map service provided very high resolution building footprint layer as shown in Fig 3. "TIANDITU Imagery" is a comprehensive geographic information service website provided by the China National Surveying and Mapping Geographic Information Bureau. It is loaded with geographic information data covering the whole world in three modes: vector, image and three-dimensional. Therefore, this map is used as a reference. In this study, the building footprint layer selected for validation is acquired in 2017 from Baidu map due to the lack of data in 2014. Data processing The building footprint layer of Baidu map was download and converted to raster data with the same resolution as the GHSL built-up area products. One problem in the spatial alignment of Baidu map with GHSL data is the various local projections of Baidu maps compared to a consistent global projection system of GHSL. We proposed and implemented a specific procedure to address this issue by the following five steps: 1. Vector data generation: Obtain the building footprint data of Baidu map, get the binary map of building outline based on the characteristics of houses, and then get the vector data of the building footprint by mosaicking, filtering, vectorization, and geo-registration. 2. Grid generation: Converting the building footprint vector data into raster data of which the spatial resolution is consistent with that of the GHSL built-up areas product requires vector grid data with the consistent grid position of the GHSL built-up areas. Therefore, the grid data of the GHSL built-up areas products are used to generate the required grid data with 250m resolution and 1000m resolution. 3. Intersection operation: The grid of vector data obtained from step 2 is intercepted with the building footprint vector data to overlay the attributes of the building footprint vector data into each grid. 4. 
Built-up areas calculation: The built-up area is usually expressed as the proportion of a unit area occupied by building footprints. The built-up proportion is computed as
P_{i,j} = S_{i,j} / S,
where P_{i,j} is the proportion of built-up area in the (i, j)th grid cell, S_{i,j} is the building footprint area in that cell, and S is the cell area. 5. Rasterization: Convert the building footprint vector data with building density attributes into raster images; this produces the built-up area validation data used in this study. The above steps are applied to the building footprint data of the 20 cities in China. A square area of 18×18 km is selected as the study coverage for each city.

Validation In this paper, statistical histograms and linear regression are used to verify the results. The histograms intuitively illustrate the distribution of built-up areas, so that the gap between the data sets can be analyzed. Linear regression is applied to quantitatively analyze the dependency relationship. It is expressed as
y = a + b·x + e,
where e is the error term and follows a normal distribution with a mean value of 0 [25]. In this paper, linear regression is used to analyze the relationship between the GHSL built-up area and the built-up area derived from Baidu maps, and the correlation coefficient is calculated as
r = Σ_i (x_i - x̄)(y_i - ȳ) / sqrt( Σ_i (x_i - x̄)² · Σ_i (y_i - ȳ)² ).

Results and discussion Based on the 2014 GHSL built-up area products, this paper processes the 2017 building footprint data obtained from Baidu map and obtains built-up area validation data at 250 m and 1000 m resolution. Figs 4 and 5 show the GHSL built-up area maps at 250 m and 1000 m resolution for the 20 study regions in China, respectively. It can be observed that the density of buildings in the urban centers is generally higher than in suburban and rural areas, which is more apparent in the 250 m product [26]. Figs 6 and 7 show the built-up area maps obtained from the Baidu map building outlines at 250 m and 1000 m resolution in the 20 study areas, respectively. Similar to Figs 4 and 5, the building density in the urban centers is higher. Due to the acceleration of urbanization, a city's building density varies over time. The building density of most cities in China increases in pace with development: the inner suburbs gradually become part of the urban core and the suburbs expand outwards. Density changes most at the edges of the study areas, while in the urban centers it does not change significantly because of the limited construction land. However, comparing Figs 4 and 6 and Figs 5 and 7, the pixel intensities of the 2014 GHSL products are relatively higher than the corresponding pixel values of the 2017 validation data, which does not match the urban development trend. Therefore, it is particularly important to verify the accuracy of the GHSL built-up area data. To reflect more intuitively the differences between the built-up areas of different cities and between the GHSL built-up area products (noted as GHSL) and the built-up areas of the validation data (acquired from Baidu map and noted as BD), the histograms of the different cities are plotted for GHSL and BD, respectively (Figs 8 and 9). Under the same resolution, a bias exists in the estimation of urban density between BD and the GHSL products across the different cities. On the one hand, the GHSL built-up area products have a larger number of pixels with an urban building density higher than 0.8 (Fig 8), and the results are similar for GHSL products at different resolutions. The per-cell built-up fraction and the regression statistics used in this comparison are illustrated in the sketch below.
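As an illustration of the per-cell built-up proportion and of the regression statistics used for validation, the following sketch computes P_{i,j} for one grid cell and fits the linear relationship between BD and GHSL densities; the numbers are invented and the helper names are ours, not part of the original processing chain.

```python
def built_up_fraction(footprint_area, cell_size=250.0):
    """P_{i,j} = S_{i,j} / S: building footprint area in a grid cell divided
    by the cell area (here a square cell of `cell_size` metres per side)."""
    return footprint_area / (cell_size * cell_size)

def linear_regression(x, y):
    """Ordinary least squares fit y = a + b*x and the squared Pearson
    correlation coefficient R^2."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    sxy = sum((xi - mx) * (yi - my) for xi, yi in zip(x, y))
    sxx = sum((xi - mx) ** 2 for xi in x)
    syy = sum((yi - my) ** 2 for yi in y)
    b = sxy / sxx
    a = my - b * mx
    r2 = sxy * sxy / (sxx * syy)
    return a, b, r2

# Toy per-pixel pairs: x = reference (Baidu footprint) density, y = GHSL density.
bd = [0.05, 0.10, 0.20, 0.35, 0.50, 0.60]
ghsl = [0.02, 0.15, 0.30, 0.45, 0.70, 0.85]
print(built_up_fraction(12500.0))          # 12,500 m^2 of roof in a 250 m cell -> 0.2
print(linear_regression(bd, ghsl))
```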
These histograms indicate that GHSL tends to overestimate large cities (high-density areas) as a whole and to underestimate small cities (low-density areas) compared to BD (Figs 8 and 9). The reason is that the GHSL products at 250 m and 1000 m resolution are interpolated from the 38 m product, which is affected by mixed pixels during product generation; therefore, the building density of regions with low-density buildings is underestimated and that of areas with high-density buildings is overestimated. On the other hand, the comparison between GHSL and BD differs between big cities and small cities at the same resolution. Large cities are generally estimated by GHSL as high-density areas, while small cities with a relatively less developed economy and smaller population are estimated to be relatively uniform across the different building density levels (Fig 8). In contrast, BD generally places the building density of both big and small cities between 0 and 0.5, and the building density level of big cities is generally higher than that of small cities in the different histogram intervals (Fig 9). This is because the number of pixels of both high-density and low-density land in large, more developed cities is correspondingly larger than in small cities. Meanwhile, the BD data used for validation (Fig 9) show that regions with high building density (above 0.6) correspond to a small number of pixels. Because part of the land between buildings is needed for lighting, urban greening and transportation, a certain proportion of land is always left unbuilt; thus the histogram distribution of the validation data, i.e. BD, is more in line with the actual urban built-up areas in China. In terms of the comparison between GHSL and BD at different resolutions, both GHSL and BD show a smoothing effect as the resolution decreases, reducing the contrast between high-density and low-density areas (Figs 8 and 9). For GHSL, the concentration at both extremes is more obvious in the 250 m product, while high values of the 38 m data are smoothed out at the 1000 m scale because of the larger interpolation unit, reducing the proportion of overestimated high values (Fig 8). The smoothing effect also applies to the BD products, since the data are obtained by converting vector data to a grid; the high-density areas above 0.6 are smoothed out as the resolution decreases to 1000 m, and the overall pixel values concentrate between 0 and 0.5 (Fig 9). In conclusion, (1) the BD data are more in line with China's reality in terms of the histogram distribution and the comparison of pixel numbers across different cities, and (2) the GHSL products show a certain overestimation of building density in large cities, whereas the estimated density of medium-sized cities has a higher consistency with BD. Therefore, the BD products are of great value for correcting the overestimation bias of GHSL in big cities. The pixel values of the GHSL built-up area products are higher at both resolutions (250 m and 1000 m); based on the distribution of the statistics of the GHSL products in all 20 cities, data with a saturation value above 0.99 at 250 m resolution and above 0.9 at 1000 m resolution are considered for quality control.
Then the value of GHSL built-up areas and validation data from the cell by pixel with the saturation value removed and the saturation value not removed is compared, as shown in Fig 10 and Fig 11, and the slopes and R 2 are shown in Tables 2 and 3. The results of the regression analysis (Figs 10 and 11) suggest that there is no significant difference in the regression slope between the saturated value and the unsaturated value at 250m resolution and 1000m resolution. However, the R 2 of data with saturation values are higher than the R 2 of data without saturation values, and the correlation of data set with 1000m resolution is higher than that with 250m resolution. The comparison between the results of 1000m resolution and 250m resolution products shows that the slope of 1000m resolution product is relatively smaller. In the regression analysis of this study, p < 0.001. From Table 2, we can find the regression parameters of cities with better economic development are lower, such as Beijing, Shanghai, Shenzhen and etc., which may result from the better development of economics, the rapid expansion of construction, the opposite economic level of the city and that the population is not floating in a small number of cities and the speed of construction expansion is relatively slow. Moreover, the trend line between GDP of each city and corresponding regression slope illustrates that the regression slopes of cities with higher GDP are lower, as shown in Fig 12. In GHSL, the built-up area class is defined as the union of all the spatial units collected by the specific sensor and containing a building or part of it [10]. For the saturation values in the results, it can be explained by the confusion between bare soils in agricultural fields and builtup areas, the appearance similarity between the ridgeline and the building footprints, and that highway (especially asphalt concrete roads) can be misclassified as built-up areas when classifying built-up areas. GHSL built-up area products tend to overestimate the building density over areas with high density since the Landsat image used is of 30m resolution, under which condition single buildings and small settlement patterns surrounded by vegetation may be difficult to identify. The use of the data with high resolution may help in alleviating the problem of confusion between built-up areas and other types of land cover such as artificial open spaces, river gravel and sand dunes [10]. Therefore, the product at 1000m resolution has the best validation result. The validating data used in this paper contains only the building footprints, with no other non-building data existing. In summary, although the R 2 of regression results are not very good, GHSL products have a certain effect on China's instructions for built-up areas. In terms of the applications of this product in China, it may not be able to meet the high accuracy requirements of building density, but it is suitable for the wide-range study of low resolution. Conclusions Aims to assess the accuracy of GHSL built-up products in China, 20 typical cities across entire China were selected as study sites. The quantitative assessment of the GHSL built-up products was carried out over different types of cities, it is expected to provide a reference of applications of the GHSL products in the dense urban area over East Asia, especially in China. 
From this assessment we can conclude that the GHSL built-up products are promising for characterizing building density, and their pixel values correlate well with the ground truth. However, comparison with assessments in the United States and European countries reveals a significant difference in the regression slopes. Using built-up area per tile from Europe and the United States as reference data, the GHSL layer was reported to have a regression slope of 0.2164 [17], which is lower than the slopes obtained here for China (Table 3). The reason for this difference is that cities in China have much higher building density and building height, so the shadow effect is much more significant and affects the estimation of building density from remotely sensed images [27]. In addition, the GHSL built-up product was generated by the JRC from Landsat images with 30 m spatial resolution, so individual buildings and building clusters surrounded by dense vegetation cannot be accurately detected, and building density is underestimated in areas with sparse, low-rise buildings. The existence of mixed pixels containing signals from roads, bare ground and buildings also affects the accuracy of the estimates. In summary, the quantitative assessment over 20 cities in China suggests that the GHSL built-up products correlate well with ground truth; however, they still need to be further validated and improved for dense urban areas, especially in East Asian countries such as China. Future work will focus on investigating models for estimating building density in both sparse and dense urban environments using time series and multi-source data, with the eventual goal of developing more generalizable models for producing building density products over large areas in an operational manner.
MEASUREMENT OF PREFERENTIAL FLOW DURING INFILTRATION AND EVAPORATION IN POROUS MEDIA

Infiltration and evaporation are governing processes for water exchange between soil and atmosphere. In addition to atmospheric supply or demand, infiltration and evaporation rates are controlled by the material properties of the subsurface and the interplay between capillary, viscous and gravitational forces. This is commonly modeled with semi-empirical approaches using continuum models, such as the Richards equation for unsaturated flow. However, preferential flow phenomena often occur, limiting or even entirely suspending the applicability of continuum-based models. During infiltration, unstable fingers may form in homogeneous or heterogeneous porous media. On the other hand, the evaporation process may be driven by the hydraulic coupling of materials with different hydraulic functions found in heterogeneous systems. To analyze such preferential flow processes, water distribution was monitored in infiltration and evaporation lab experiments using neutron transmission techniques. Measurements were performed in 2D and 3D, using homogeneous and heterogeneous setups. The experimental findings demonstrate the fingering effect during infiltration and how it is influenced by the presence of fine inclusions in a coarse background material. During evaporation, the hydraulic coupling effect is found to control the evaporation rate, which limits the modeling of water balances between soil and atmosphere based on surface information alone.

Introduction

A significant part of the water exchange between the subsurface and the atmosphere occurs through infiltration and evaporation in the unsaturated zone. Infiltration and evaporation are governed concurrently by the relevant flow and transport processes in both the atmosphere and the subsurface, which makes the description and prediction of such systems inherently complex. For a given atmospheric supply or demand, infiltration and evaporation in the subsurface are controlled by the soil properties and the interplay between capillary, viscous and gravitational forces.

To achieve predictions for such problems, these effects are commonly described by assigning effective medium properties to the soil (i.e. permeability), combined with continuum-based models for flow, such as the Richards equation for the unsaturated zone. Nevertheless, continuum-based approaches have certain limitations, as the substitution of real materials with effective media means that some processes vanish during the averaging procedure. Such limitations are revealed when preferential flow phenomena arise and need to be accounted for, since a basic feature of such models is the stability of the solution.
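For reference, the continuum description mentioned above is usually written as the Richards equation; its standard one-dimensional mixed form is reproduced below purely to fix notation (textbook material, not a result of this work):

```latex
\frac{\partial \theta(h)}{\partial t}
  = \frac{\partial}{\partial z}
    \left[ K(h) \left( \frac{\partial h}{\partial z} + 1 \right) \right]
```

Here θ is the volumetric water content, h the pressure head, K(h) the unsaturated hydraulic conductivity and z the vertical coordinate (positive upward).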
A typical example of preferential flow is the formation of unstable wetting fronts during infiltration into initially dry porous media. Instability in wetting fronts is triggered by pore-scale heterogeneity, stemming from different pore sizes and shapes, which determines the local forces and therefore the overall front propagation in the air-filled pore space. Such effects have been studied at the pore scale with pore-network models (DiCarlo, 2006) and invasion-percolation models (Glass and Yarrington, 1996). On the effective medium scale, preferential flow has been widely investigated with 2D infiltration experiments in homogeneous porous media using Hele-Shaw cells (Glass et al., 1989; DiCarlo, 2004). Similar 2D experiments have also been conducted with inclusions of fine material in a coarse background, showing that heterogeneity tends to locally eliminate the instability in the inclusions (Hill and Parlange, 1972; Sililo and Tellam, 2000; Rezanezhad et al., 2006). Nevertheless, due to the typically fast propagation of infiltration fronts, monitoring preferential flow in 3D has been limited to delineating unstable wetting fronts and finger diameters (Glass et al., 1990; Tullis and Wright, 2007). Saturation measurements have thus been restricted to 2D setups, which carry a degree of uncertainty related to the influence of the boundaries on the flow process. In the first part of the work presented here we extend saturation measurements to 3D, discussing whether our 2D knowledge of preferential flow is indeed relevant in 3D and focusing on the effect of material interfaces. For that purpose, we measure the water saturation distribution in a 3D column during unstable wetting for a homogeneous case as well as for a case with fine material inclusions in a coarse background.

Similar limitations of classical continuum models are encountered when preferential flow is induced by evaporation. On the pore scale, evaporation from porous media causes water movement from the larger pores near the drying front to smaller pores near the surface, driven by the capillary pressure difference. This effect, termed capillary pumping (Yiotis et al., 2001), has been investigated extensively from a pore-scale perspective (Prat, 1993; Prat, 2002; Yiotis et al., 2004). Experimental work also exists on the medium scale, using initially wet or partly wet homogeneous porous media subject to evaporation (Shokri et al., 2008). Analogously to the pore-scale pumping effect, the effect of hydraulic coupling has been demonstrated on the medium scale using coupled vertical columns of different materials (Lehmann and Or, 2009). In this case, preferential water flow is induced from the coarse-textured to the fine-textured column through the coupling of the materials, sustaining high evaporation rates through the fine-textured surface for longer periods. These observations indicate that, in order to predict evaporation rates from soils, one needs to account for subsurface structural features and the emerging preferential flow paths in addition to the classical drying-front dynamics. In the second part of this work, we investigate the effect of hydraulic coupling by monitoring the water saturation distribution during evaporation from a heterogeneous structure consisting of a tortuous fine-textured inclusion embedded in a coarse background.

Infiltration experimental setup

The infiltration experiments were conducted in a cylindrical column ( … mm height, 100 mm inner diameter; Fig. 1a).
The experiment was first performed with a homogeneous packing of a coarse material and then repeated with fine-textured inclusions embedded in the coarse background. The inclusions were lunate-shaped in the horizontal plane and were introduced symmetrically with respect to the centre of the column at z = 15 and 30 mm, each with a height of 10 mm. This configuration left a wide opening of the coarse material towards the bottom. The coarse material was quartz sand with grain sizes ranging from 0.7 to 1.2 mm; quartz sand with grain sizes from 0.1 to 0.3 mm was used as the fine material. The sands were packed by simply pouring the entirely dry particles into the column, which also corresponds to the initial condition of the infiltration experiments. The base of the column was perforated to avoid ponding of water at the bottom and to prevent any air-pressure build-up inside the porous medium. A fine metal grid sealed the outlets to keep the sand particles from flowing out with the water.

The top of the column was open to the atmosphere and subjected to a constant infiltration rate of 10 ml/min (Fig. 1a). The infiltrating water was delivered evenly to the sand surface by distributing the inflow into tubes ending at several injection points. Additionally, the top of the structure was covered with a 20 mm thick layer of the fine material to ensure an initially homogeneous spreading of the water in this layer through capillarity and to prevent the formation of artificial preferential flow paths near the injection points. The bottom boundary was of free-flow type, and any water reaching the bottom was collected in an external container positioned under the column.

Evaporation experimental setup

The evaporation experiment was carried out in 2D using a Hele-Shaw cell of 280 mm height, 500 mm width and 20 mm thickness. The heterogeneous structure consisted of a tortuous fine-textured inclusion connecting the bottom and top of the cell within a coarse-textured background (Fig. 1b). The coarse background material was quartz sand with grain sizes ranging from 0.7 to 0.9 mm. The fine material was quartz powder with grain sizes in the range of 0 to 0.06 mm and a mean diameter of 0.012 mm. This resulted in a contrast of several orders of magnitude in saturated hydraulic conductivity. In order to achieve a fully saturated initial condition, the entire structure was packed under water. The sand was first flushed and immersed in separate glass beakers. Artificial variations of porosity during the wet packing were prevented by depositing the wet sand particles from a constant falling distance into the water-filled Hele-Shaw cell and mixing the packed particles every 10-20 mm (Lehmann et al., 2008). The tortuous inclusion was shaped by placing metal sheets that were slowly shifted after the fine particles had settled. Despite this painstaking procedure, some mixing of the two materials near the boundaries of the inclusion was inevitable due to the large grain-size contrast.

The top of the Hele-Shaw cell was open to the atmosphere and acted as the evaporative surface (Fig. 1b). A hair-dryer, blowing at constant fan speed from a fixed distance towards the evaporative surface, was used to control the evaporation rate. This offers the advantage of a faster evaporation process without influencing the water distribution inside the porous medium (Shokri et al., 2008).
Measurement techniques

Water movement inside the porous media was monitored using thermal neutron transmission at the NEUTRA station of the Spallation Neutron Source (SINQ) of the Paul Scherrer Institute, Switzerland. The following measurements were performed:

• Infiltration experiments (fast process): 2D width-averaged dynamic water distribution by means of fast radiography in time increments of 4 s during the front propagation; additionally, 3D reconstruction of infiltration patterns by means of tomography obtained by rotational scanning in steps of 2°.

• Evaporation experiment (slow process): 2D water distribution by means of slow radiography in time increments of 30 min over a period of 6.5 d.

Measuring water distribution in porous media with neutron imaging is based on relating water saturation to the neutron intensity detected on a scintillator behind the scanned medium. The neutron intensity I passing through quartz sand, water and the cell or column walls (i.e. glass or aluminium) follows the exponential attenuation law

I = I0 exp[-(αq dq + αw dw + αc dc)],   (1)

with the source neutron beam intensity I0, the neutron attenuation coefficients αq, αw, αc and the effective thicknesses dq, dw, dc of quartz, water and the cell or column wall, respectively. The recorded intensity I was filtered to correct for source beam intensity variations and neutron scattering effects. Consequently, the effective water thickness dw can be deduced by comparing each image (intensity I) with a reference image (Iref) that is either entirely wet or entirely dry and thus has a known water thickness dref equal to dwet or zero. In a similar fashion, the coefficient αw can be determined with Eq. (1); the term dref/dwet then reduces to one or zero, depending on the reference image used.

In the infiltration experiment, the water mass in the system was defined by the water inflow prescribed with the balance. For the evaporation experiment, however, the evaporated mass was naturally not predefined, and therefore the total water mass in the Hele-Shaw cell was recorded continuously with digital balances. This was done for the entire measurement period of 6.5 d and was continued after the end of the saturation measurements until t = 13 d. More details are given in Sect. 3.2.

Infiltration experiments

With the application of the infiltration rate, water first distributed in the fine material at the top and saturated it. Preferential flow in the infiltration experiments then occurred in the form of unstable fingers in the coarse material (Fig. 2). The fingers formed at the interface between the fine and the coarse material and propagated rapidly through the coarse material towards the bottom of the column, driven by gravity. It must be stated that fingering here was triggered purely by pore-scale effects and was not initiated by any kind of artificial structures or variations in the infiltration rate.

The homogeneous packing (Fig. 2a) resulted in the formation of two fingers.
The vertical propagation is illustrated with a radiography image (left), while cross-sectional neutron intensities delineate the finger area in the column at z = 72 and 33 mm (middle and right images). The first finger came into contact with the wall of the column soon after its formation at the fine-coarse interface and reached the bottom within 22 s, remaining attached to the wall. The second finger, however, formed in the middle of the column and reached the bottom within 36 s. These observations indicate that boundaries can indeed introduce preferential pathways for water and possibly create or accelerate preferential flow phenomena at the walls. However, fingering also occurs away from the column walls, demonstrating that unstable wetting is relevant for 3D applications as well.

The heterogeneous case (Fig. 2b) demonstrates the formation of a finger and its behaviour in the presence of the two symmetric, lunate-shaped fine-material inclusions. Again, the vertical propagation is illustrated with a radiography image (left) while cross-sections are shown at z = 72 and 33 mm (middle and right images). Once more, the finger formed near the centre of the column (left and middle images); in this case, however, it was eliminated in the lower region of the column near the fine inclusions (left and right images). This happened even though the central cross-sectional area was not occupied by fine material, owing to the symmetric lunate shapes of the inclusions, a configuration that would in general allow finger propagation through the centre. However, any hydraulic connection of the finger to the fine material drives water into the inclusion due to the capillary pressure difference, interrupting the finger propagation and forming a smeared-out infiltration front. Through this pronounced pore-scale capillary effect, material interfaces dominate the system and determine the stability of the wetting front. This is visible in the radiography image (left) as well as in the cross-sectional image near the inclusions (right). In the cross-section at z = 33 mm, the finger is still discernible near the centre of the column, as is its hydraulic connection to the (at this stage wet) upper inclusion. Although the infiltration from the top was continued, fingering underneath the inclusions was not observed even after the inclusions were saturated; however, this presumably also relates to the limited height of the column.

Evaporation experiment

The saturation distribution obtained with neutron radiography during the evaporation experiment is given in Fig. 3 for t = 1, 2, 3, 4, 5 and 6 d. The saturation images also allow a straightforward derivation of the change of total water mass in the porous medium during the evaporation process. It is thus possible to derive the evaporation rate from the neutron radiography alone. This evaporation rate is compared to the one measured with the digital balances in Fig. 4. Such a comparison is commonly used as a calibration step in neutron imaging, providing an additional correction for intensity variations and scattering errors. In this case, however, agreement between the radiography-based mass balance and the balance measurements was obtained without any calibration.
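A minimal sketch of such an image-based mass balance is given below; the attenuation coefficient, pixel size and intensities are assumed values for illustration only, and beam-variation and scattering corrections are omitted, so this is not the processing actually applied to the measurements:

```python
import numpy as np

ALPHA_W = 3.5            # 1/cm, assumed neutron attenuation coefficient of water
PIXEL_AREA = 0.01 ** 2   # cm^2, assumed 100 micrometre pixels
RHO_W = 1.0              # g/cm^3, density of water

def water_mass(I, I_dry, alpha_w=ALPHA_W):
    """Total water mass (g) in one radiograph, using the exponential
    attenuation law: d_w = ln(I_dry / I) / alpha_w per pixel (the dry
    reference cancels the quartz and wall contributions)."""
    d_w = np.log(I_dry / I) / alpha_w        # water thickness per pixel, cm
    return RHO_W * PIXEL_AREA * d_w.sum()

def evaporation_rate(masses_g, times_d):
    """Evaporation rate (g/d) between successive radiographs."""
    return -np.diff(masses_g) / np.diff(times_d)

# Synthetic example: water thickness decreasing over three radiographs
I_dry = np.full((512, 512), 1000.0)
frames = [I_dry * np.exp(-ALPHA_W * d) for d in (0.30, 0.28, 0.27)]   # cm of water
masses = np.array([water_mass(I, I_dry) for I in frames])
print(evaporation_rate(masses, np.array([0.0, 0.5, 1.0])))
```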
The evaporation rate is characterized by three periods: (1) an initial period from t = 0 to t = 0.4 d, with rate values fluctuating between 200 and 250 g/d; (2) a falling-rate period from t = 0.4 to t = 1.65 d, with a sharp decrease from 250 to 60 g/d; and (3) a quasi-constant-rate period from t = 1.65 to t = 6.5 d, with a slow decrease from 60 to 44 g/d. Additionally, the balance measurements indicate that the rate of 44 g/d was maintained until t = 9 d. Although the evolution of the evaporation rate resembles the classical picture of a first (high-rate) and a second (low-rate) stage of evaporation, the origin of this pattern has to be examined closely, since the rate stems from a coupled system of two materials. The saturation images show that the top of the fine material remained saturated until the end of the experiment; it is thus safe to conclude that the fine material sustained first-stage evaporation. This is, however, not the case for the coarse material. An estimate of the drying-front depth at which the transition from first- to second-stage evaporation starts is given by the characteristic evaporation length Lcoarse of the coarse material (Lehmann et al., 2008). Deriving the characteristic lengths of the materials and comparing them with the deduced drying-front images (Fig. 4) reveals that the drying front reaches the depth Lcoarse at t = 0.4 d, indicating that this time corresponds to the end of first-stage evaporation of the coarse material.

Nevertheless, the structure continued to supply high evaporation rates of 44 to 60 g/d until t = 9 d. This evaporated mass can be attributed to (i) mass that originated at the drying front and reached the surface by vapour diffusion through the coarse material, and (ii) first-stage evaporation of the fine material, which, since the fine material remained saturated throughout the entire measurement, translates into a hydraulic coupling effect between the two materials. The evaporative mass supplied by the vapour diffusion mechanism (i) can be estimated with Penman's model (Penman, 1940), using the medium porosity, the vapour diffusivity in free air, the saturated vapour density at the front, the vapour density at the surface, the water content above the front and the drying-front depth. Based on this approach, the maximum diffusive flux (i.e. for the front position shown in Fig. 4b) during second-stage evaporation from the coarse material in this setup was found to be of the order of 0.01 g/d. This corresponds to 1 g/d per m2, a value that agrees with previous observations of diffusive fluxes during second-stage evaporation from porous media (Shokri et al., 2008).
This diffusive flux is negligible compared with the 44 g/d evaporation rate measured until t = 9 d, showing that the measured rate was mainly sustained by first-stage evaporation from the fine material. This strong contrast between the evaporative mass supplied by the diffusion-driven process and that supplied by the material coupling highlights the significance of hydraulic coupling effects during evaporation from heterogeneous porous media. Water is drawn from the coarse background into the fine material and flows through the tortuous path towards the fine-textured evaporative surface, supplying the evaporative demand. In practice, this effect sustains higher evaporation rates for longer periods than one would predict by neglecting the coupled behaviour of the system. This preferential water flow through the inclusion is driven by the atmospheric demand but is strongly controlled by the contrast in hydraulic properties of the materials as well as by the geometry of the inclusion, especially its connectivity.

Conclusions

The first part of the presented work deals with monitoring preferential flow during infiltration into porous media. We present 3D measurements of the water saturation distribution during unstable wetting of initially dry porous media. Two cases are examined: a homogeneous case using a coarse material, and a heterogeneous case with fine-material inclusions embedded in the coarse background. The experiments showed that preferential flow phenomena can be significant in 3D infiltration problems; however, one has to be aware of artificial boundary effects, i.e. when investigating such processes in 2D. The existence of fine inclusions eliminated, at least locally, the fingering effect, even though the inclusion configuration used here could, in principle, have allowed finger propagation towards the bottom. This behaviour stems from the capillary pressure difference between the materials and reveals that material interfaces play a dominant role in the 3D infiltration process: any hydraulic connection between the finger and the inclusion (even through a few pores) can stabilize the wetting front.

The second part presents 2D measurements of the water saturation distribution during preferential flow induced by hydraulic coupling of materials during evaporation from heterogeneous porous media. The structure under examination consisted of a tortuous fine-textured inclusion, connecting the bottom to the evaporative surface at the top, embedded in a coarse background. The experiment revealed the significance of the coupling effect: even though first-stage evaporation of the background material ended early, the structure continued to supply high evaporation rates through the fine-textured surface until the background material was practically dried out. These observations show that evaporation rates can be strongly underestimated when the coupled behaviour of materials in the subsurface is neglected. Therefore, in order to achieve predictive modelling of evaporation from soils, it is necessary to account for the structural heterogeneity of the subsurface.
Fig. 2: Finger formation during wetting: (a) in the homogeneously packed coarse material and (b) with lunate-shaped inclusions of fine material. The left image illustrates depth-averaged saturation obtained with neutron radiography. The middle and right images qualitatively depict neutron intensities in cross-sections (100 mm diameter) of the column at heights z = 72 and z = 33 mm, respectively.

Fig. 4: Evaporation rate determined from the saturation images and the digital balance measurements. Front positions at different phases of the evaporation process (t = 0.4, 1.65, 3.3, 6.5 d and, qualitatively, at t = 10 d) are also illustrated.
Research advances in imaging markers for predicting hematoma expansion in intracerebral hemorrhage: a narrative review

Introduction: Stroke is a major global health concern and is ranked as the second leading cause of death worldwide, with the third highest incidence of disability. Intracerebral hemorrhage (ICH) is a devastating form of stroke that is responsible for a significant proportion of stroke-related morbidity and mortality worldwide. Hematoma expansion (HE), which occurs in up to one-third of ICH patients, is a strong predictor of poor prognosis and is potentially preventable if high-risk patients are identified early. In this review, we provide a comprehensive summary of previous research in this area and highlight the potential use of imaging markers in future studies.

Recent advances: Imaging markers have been developed in recent years to aid in the early detection of HE and to guide clinical decision-making. These markers have been found to be effective in predicting HE in ICH patients and include specific findings on computed tomography (CT) and CT angiography (CTA), such as the spot sign, leakage sign, spot-tail sign, island sign, satellite sign, iodine sign, blend sign, swirl sign, black hole sign, and hypodensities. The use of imaging markers holds great promise for improving the management and outcomes of ICH patients.

Conclusion: The management of ICH presents a significant challenge, and identifying patients at high risk of HE is crucial to improving outcomes. The use of imaging markers for HE prediction can aid in the rapid identification of such patients, and these markers may serve as potential targets for anti-HE therapies in the acute phase of ICH. Further research is needed to establish the reliability and validity of these markers in identifying high-risk patients and guiding appropriate treatment decisions.

Introduction

Stroke is a major global health concern and is ranked as the second leading cause of death worldwide, with the third highest incidence of disability (1). Intracerebral hemorrhage (ICH) is a type of stroke that occurs when a blood vessel ruptures within the brain, leading to bleeding and tissue damage. It is a devastating condition, and the number of cases continues to rise each year, with an estimated 3.41 million new cases annually (1, 2). Previous studies have reported 30-day mortality rates of 30-50% for ICH, with nearly 50% of patients dying within 2 weeks of symptom onset (3). The clinical outcome of ICH patients is significantly affected by the initial hematoma volume and location, and hematoma expansion (HE), present in 33% of cases, is an independent predictor of poor clinical prognosis and secondary neurological deterioration (4). Although no unified diagnostic standard currently exists, computed tomography (CT) and CT angiography (CTA) are commonly used. Therefore, investigating the baseline CT and CTA images of ICH patients is critical for efficient clinical management. In recent years, many studies have identified specific CTA, CT and magnetic resonance imaging (MRI) findings that are associated with HE, including the spot sign (SpS) (5), leakage sign (LS) (6), spot-tail sign (STS) (7), iodine sign (IoS) (8), island sign (IS) (9), satellite sign (SaS) (10), blend sign (BS) (11), swirl sign (SwS) (12), black hole sign (BHS) (13), hypodensities (14), fluid-blood level (FBL) (15), subarachnoid extension (SAHE) (16), and the MRI SpS (17).
These imaging markers provide a more powerful approach for identifying patients at high risk of HE. This review explores the potential correlation between specific hematoma findings on CTA and CT and early HE, with the goal of enabling timely and effective intervention in ICH patients with HE, reducing mortality and improving outcomes.

The definition of HE

In most studies, HE is defined as an increase in hematoma volume of more than 12.5 ml or more than 33% on follow-up CT compared with the initial CT scan (2). However, for studies involving CTA contrast agent extravasation, HE is defined as a proportional increase of 33% or an absolute increase in hematoma volume of more than 6 ml (5, 18, 19). Although the definition of HE as >33% or >12.5 ml is used frequently in recent studies, no consensus on the definition of clinically significant HE has yet been reached. Establishing an optimal definition of HE, both mathematically and clinically, is essential to reduce heterogeneity among studies and to better stratify high-risk patients with ICH. Baseline CT scans are typically performed within 6 h of symptom onset, and follow-up CT scans within 24 h after the baseline scan. Methods for calculating hematoma volume include the ABC/2 (Coniglobus) formula, area measurement, and three-dimensional delineation. Semiautomatic measurement techniques are more accurate than the ABC/2 formula, particularly for irregularly shaped hematomas (20); the Coniglobus formula can have errors of up to 20% for irregular hematomas, which is highly unfavorable for evaluating the condition of ICH patients. In recent years, a few studies have used medical software to calculate the initial hematoma volume (21, 22). For example, the 3D Slicer software (Version 4.8.0, Harvard University, NY) introduces only a very small error in hematoma volume calculation and can detect inapparent HE. The use of 3D Slicer for calculating hematoma volume is therefore more accurate and can improve the assessment of the patient's condition.
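As a simple illustration of the volume estimate and expansion criteria quoted above, the sketch below applies the ABC/2 formula and the >33% / >12.5 ml thresholds (function and variable names are illustrative only; the 6 ml threshold applies to the CTA-based definition):

```python
def abc_over_2(a_cm, b_cm, c_cm):
    """Coniglobus (ABC/2) estimate of hematoma volume in ml:
    A = largest diameter, B = diameter perpendicular to A on the same slice,
    C = vertical extent (slice count x slice thickness)."""
    return a_cm * b_cm * c_cm / 2.0

def hematoma_expansion(v_baseline_ml, v_followup_ml,
                       rel_threshold=0.33, abs_threshold_ml=12.5):
    """True if the follow-up scan meets the usual HE definition:
    >33% relative growth or >12.5 ml absolute growth (pass 6.0 for the
    CTA-based definition mentioned in the text)."""
    growth = v_followup_ml - v_baseline_ml
    return growth > abs_threshold_ml or growth > rel_threshold * v_baseline_ml

v0 = abc_over_2(4.0, 3.0, 3.0)      # 18.0 ml at baseline
v1 = abc_over_2(5.0, 3.5, 3.5)      # ~30.6 ml at follow-up
print(v0, v1, hematoma_expansion(v0, v1))   # -> 18.0 30.625 True
```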
Etiology

The main factors contributing to HE in patients with ICH include primary hypertension and cerebral amyloid angiopathy (CAA), diabetes, abnormal coagulation, and genetic variation (Figure 1).

Primary hypertension and CAA

Both can cause blood vessel wall thickening, arteriosclerosis, and vessel fragility, increasing the risk of HE (23). Early studies suggested that antihypertensive treatment could efficiently prevent HE, but subsequent studies found no significant association between antihypertensive treatment and either limiting HE or reducing mortality (24). A possible reason is that HE has often already occurred before admission, or occurs within minutes of ICH onset, so antihypertensive treatment started after admission does not improve the patient's condition.

Diabetes

Liu et al. (25) found that early HE was associated with the high osmotic pressure caused by elevated blood glucose. Elevated blood glucose levels can increase blood-brain barrier (BBB) permeability and lead to its disruption, making it easier for blood components to enter the brain tissue and cause further damage. Disruption of the BBB may also facilitate the infiltration of inflammatory cells into the brain, which can exacerbate the inflammatory response and contribute to HE. In addition, diabetes can impair coagulation function and alter the activity of several coagulation factors, resulting in a procoagulant state that favors the formation of blood clots. When a vessel ruptures in the brain, clot formation contributes to the initial hematoma, and the procoagulant state may also contribute to hematoma enlargement by promoting the formation of microthrombi that occlude small vessels and cause ischemic damage in the surrounding tissue. Finally, diabetes can activate various inflammatory pathways, including the nuclear factor-kappa B pathway, which promotes the production of pro-inflammatory cytokines and increases oxidative stress in the brain tissue. These effects can aggravate the initial injury and exacerbate HE.

Abnormal coagulation

Abnormal coagulation significantly increases the risk of HE. Patients with ICH often have abnormal blood coagulation, particularly when anticoagulant or antiplatelet drugs have been used (26). Restoring coagulation function can substantially reduce the risk of HE and improve outcome.

Genetic variation

Studies have shown that variations in certain genes are associated with an increased risk of HE and poor clinical outcomes in patients with ICH. Several single-nucleotide polymorphisms in the apolipoprotein E (APOE) gene have been identified as risk factors for HE and poor outcomes in ICH patients (27). APOE plays a key role in lipid metabolism and transport in the brain; in ICH, the APOE gene may also influence the metabolism and clearance of blood in the brain, leading to an increased risk of HE. Other genes linked to HE in ICH patients include the factor XIII subunit A gene, which is involved in blood coagulation and clot stability, and the matrix metalloproteinase gene family, which is involved in the breakdown of the extracellular matrix and tissue remodeling (28, 29). Variations in these genes may impair coagulation and clot stability, or increase breakdown of the extracellular matrix, thereby promoting HE and poor outcomes in ICH. Furthermore, genetic factors may interact with other clinical and environmental factors to increase the risk of HE. For example, one study found that the combination of certain genetic variants with high blood pressure was associated with an increased risk of HE in ICH patients (30), and another found that the interaction between genetic variants and smoking was associated with an increased risk of HE and poor outcomes (31).

Pathophysiology

According to pathological evidence, enlargement of the hematoma may be attributed to secondary mechanical shearing of vessels adjacent to the initial site of bleeding. In the 1970s, Fisher et al. (32) proposed the "avalanche" model to explain how primary ICH causes secondary mechanical damage to adjacent vessels. Subsequent studies revealed that the distribution of hematoma volume in patients with ICH is bimodal, and that both micro-hematomas and large hematomas are consistent with the "avalanche" injury process (33). According to the "avalanche" model, the enlarging part of the hematoma corresponds to the site of the SpS, which indicates active bleeding on CTA; multiple SpSs in the same hematoma suggest that several blood vessels are bleeding simultaneously (34).
Some researchers have proposed that HE in ICH is caused by the rupture and persistent bleeding of a single vessel (35), but to date there is no direct pathophysiological evidence to support this theory. Recent studies have shown that vascular abnormalities, including microaneurysms, exist in the brain tissue surrounding the hematoma and are likely to play a role in the pathogenesis of HE (36, 37). Moreover, studies have suggested that coagulation abnormalities and inflammation may contribute to the development of HE (38, 39). Furthermore, activation of the inflammatory cascade, generation of coagulation end-products, and hemoglobin degradation products initiate a secondary injury cascade, which proceeds via diverse molecular pathways, including but not limited to mitochondrial failure, iron-mediated oxidative stress, and sodium accumulation. Ultimately, these mechanisms generate proinflammatory mediators that trigger breakdown of the blood-brain barrier, cerebral edema, and neuronal apoptosis (40). Further preclinical and clinical research is therefore needed to gain deeper insight into the pathophysiology of both HE and ICH and to identify potential therapeutic targets to prevent or minimize its development.

Figure 1: The etiology of hematoma expansion in intracerebral hemorrhage (by Figdraw).

Outcome

Recent studies have further confirmed the strong association between HE and secondary neurological deterioration, poor clinical outcomes, and mortality in patients with ICH (4, 41). A dose-response relationship has been observed: for every 10% increase in hematoma volume, the case fatality rate increases by 5%, and for every 1 ml increase in hematoma volume, the likelihood of an ICH patient transitioning from independent living to being unable to care for themselves increases by 7%, based on modified Rankin Scale (mRS) evaluations (42). Moreover, the degree of HE has been consistently related to functional prognosis and mortality, regardless of the definition of HE used. These findings suggest that identifying patients at high risk of HE and implementing targeted interventions to prevent HE and its sequelae may improve clinical outcomes in patients with ICH. The impact of HE on patient outcomes is, moreover, not limited to immediate mortality and disability: HE is also associated with long-term functional and cognitive impairment, reduced quality of life, and an increased risk of recurrent ICH (43, 44). The underlying mechanisms of these long-term consequences are not fully understood but may be related to ongoing neuroinflammation and secondary injury to the surrounding brain tissue. To mitigate the impact of HE, early identification of patients at high risk is crucial. In addition to hematoma volume, other imaging markers such as the BS, BHS, and IS have been proposed as predictors of HE (45, 46). However, these markers require specialized training and may not be widely available. Recently, machine learning algorithms have been applied to automatically identify these markers and predict HE, with promising results (47, 48). Overall, while HE is a common and serious complication in patients with ICH, the development of reliable and accessible imaging markers and targeted interventions may help to improve patient outcomes and reduce the burden of this devastating disease.
Characteristic imaging markers of HE

Based on the imaging technique used, we describe the imaging markers on CTA, CT, and MRI in turn. A summary of the imaging markers associated with HE in ICH is presented in Table 1.

Imaging markers on CTA

Spot sign

The SpS on CTA refers to an enhanced focus within the hematoma on the source images (5) (Figure 2A). The biological basis of the SpS is not yet fully understood, but recent studies suggest that it may be related to increased permeability of cerebral vessels, indicating a higher risk of HE. According to previous studies, the permeability derived from CT perfusion imaging (CTP) can indicate whether a SpS is present: permeability here refers to the rate at which contrast agent leaks out of the cerebral vasculature, and the higher the permeability, the greater the likelihood of HE (50). At present, early clinical manifestations, coagulation status, the APOE ε2 allele, the Glasgow Coma Scale at onset, a mean arterial pressure > 120 mmHg, and intraventricular hemorrhage (IVH) have been associated with the appearance of the SpS. Studies have also shown that a SpS grading system can not only predict the early incidence of HE in ICH patients but also provide accurate classification of HE, in-hospital mortality, and clinical outcome (53), which will help to further screen high-risk ICH patients. Recent research has focused on refining the use of the SpS for clinical decision-making: one study found that incorporating the SpS and other clinical factors into a predictive model could accurately identify patients at high risk of HE and guide treatment decisions (54), and other studies have explored machine learning algorithms to automatically detect and quantify the spot sign, which may improve the efficiency and accuracy of diagnosis (55, 56). Overall, the SpS is a promising imaging marker for predicting the risk of HE in ICH patients, and its accurate detection and quantification may help to guide clinical decision-making and improve patient outcomes. However, further research is needed to fully understand the underlying biological mechanisms of the SpS and to refine its use in clinical practice.

Leakage sign

In 2016, Orito et al. (6) proposed the concept of the LS, building on previous SpS studies, and established a method to determine a positive LS by comparing CTA-phase and delayed CTA-phase images (Figure 2B). First, each evaluator sets a region of interest (ROI) with a 1 cm margin on the delayed CTA-phase image, in the area with the highest Hounsfield unit (HU) change between the CTA phase and the delayed phase. Second, the same ROI is placed on the CTA image at the same anatomical location. The HU values within the ROI on both images are then calculated, and an HU increase of >10% is considered a positive LS, indicating subtle contrast agent extravasation. Their study found that the LS had a higher sensitivity (SEN, 93.30%) and specificity (SPE, 88.80%) for predicting HE than the SpS. Furthermore, patients with a positive LS had a significantly worse prognosis than those with a negative LS. The LS may in fact represent a dynamic change of the hematoma and be a more sensitive marker for predicting HE in ICH patients, although further research is needed to validate these results and establish its clinical utility. Another potential issue is that this method may result in higher radiation exposure.
However, if HE can be diagnosed, the clinical benefit outweighs the additional radiation exposure risk, and the presence or absence of the LS does not significantly affect surgical indications.

Spot-tail sign

The STS, proposed by Sorimachi et al. (7) in 2013, combines the SpS with the presence of an intrahematoma striate artery on coronal CTA images (Figure 2C), and has shown potential as a more accurate predictor of HE and acute neurological deterioration than the SpS alone. Recent studies have further supported its usefulness. For example, Phan et al. (57) found that the STS was associated with a higher risk of early neurological deterioration and was an independent predictor of HE, whereas the SpS alone was not a significant predictor of these outcomes. Li et al. (58) found that the presence of the STS was associated with larger hematoma volume, more frequent IVH, and worse clinical outcome. One possible explanation for the association between the STS and HE is that the striate artery represents the site of active bleeding, and the sustained blood supply through the striatum to the bleeding site promotes HE; this hypothesis is supported by angiographic images showing contrast agent extravasation from the striate artery. In conclusion, if the hypothesis is valid (i.e., the striate artery is the location of active bleeding), the STS may be a more sensitive predictor of HE and acute neurological deterioration than the SpS. However, more research is needed to confirm this hypothesis and to further clarify the mechanism of HE.

Iodine sign

Gemstone spectral imaging (GSI) is a promising scanning mode that enables direct separation of iodine from blood and subsequent quantification of iodine concentration (IC) by monochromatic imaging. On this basis, a novel marker called the IoS has been introduced (Figure 3C), which directly reflects leaking iodinated contrast and predicts HE. A positive IoS was defined as: (1) one or more enhanced foci of any size and morphology on the iodine-based decomposition image within the hematoma, assessed by visual inspection (conducted by non-radiologists); (2) an internal focus IC >7.82100 μg/ml, measured by reviewers using a region of interest covering most of the focus area (magnified ×3 to ×5); and (3) discontinuity from adjacent normal or abnormal vasculature. A study by Fu et al. (8) demonstrated that the IoS was a reliable and sensitive marker for predicting HE and poor functional outcomes in ICH patients. Another comparative study of the BHS, SaS, and IoS for predicting HE in patients with spontaneous ICH demonstrated that the GSI-based IoS had better predictive value for HE, with higher sensitivity and accuracy (60). Despite the usefulness of spectral imaging, it may not be available in many medical institutions, which potentially limits the adoption of the IoS. Large, multi-center studies are still urgently needed to establish whether the IoS is a reliable and sensitive marker for predicting HE and poor functional outcomes.

Island sign

Li et al. (9) proposed the IS as an independent predictor of HE and poor functional outcome in patients with ICH (Figure 3A). The sign is defined as the presence of three or more small hematomas scattered and separated from the main hematoma, or four or more small hematomas, some of which may be connected with the main hematoma.
An island hematoma separate from the main hematoma is round or oval, whereas a small hematoma connected with the main hematoma should be vesicular or bud-like but not lobulated. The IS represents extreme margin irregularity and is a refinement of the shape irregularity scale. Huang et al. (21) further suggested that the IS is a strong predictor of HE and is useful for scoring the prediction of HE in ICH. In comparison, Zheng et al. (61) showed that although the accuracy of the IS for predicting HE is lower than that of the SpS, it can serve as an alternative predictor when CTA cannot be performed. Moreover, Zhang et al. (62, 63) validated Li et al.'s findings and showed that the IS predicts both early HE and long-term poor clinical prognosis, and that admission serum glucose is associated with both HE and the IS. Huang et al. (22) also observed that the incidence of the IS was higher in patients with larger hematomas, implying that "worse hematomas get worse." However, further research is required to explore the pathogenesis of the IS in larger hematomas. Nonetheless, the IS is a useful imaging marker that can aid in predicting HE and poor clinical outcomes in patients with ICH.

Satellite sign

In 2017, Shimoda et al. (10) proposed the definition of the SaS (Figure 3B), which is characterized by a small hematoma completely separated from the main hematoma, with a diameter smaller than 10 mm and a distance between the small and main hematomas of 1 to 20 mm. A study of 257 patients with spontaneous ICH (sICH) showed that the presence of at least one SaS on CT images was an independent and serious risk factor for poor prognosis (10). However, the relationship between the SaS and HE remains unclear. The SaS was found to be strongly associated with increased blood pressure, decreased activated partial thromboplastin time, large hematoma volume, and IVH at admission, which may help predict the prognosis of patients with sICH. The authors suggested that metabolic changes around the hematoma are associated with cytotoxic effects, which may lead to hemorrhagic transformation or reperfusion injury, ultimately resulting in destruction of the capillary blood-brain barrier and the formation of a SaS as a lesion around the hematoma. Further research is needed to determine the mechanism of hemorrhage around the hematoma. It is important to note that some SaSs may actually be part of an irregular hematoma, which is thought to result from multiple arteriolar hemorrhages (49, 64-66); Barras et al. (64) identified the SaS as part of the cut end of a lobulated irregular hemorrhage. Despite this, the SaS can still be used as a predictor of patient outcome because of its clear definition and easy identification. A comparative study of the SaS and SpS in 153 patients with sICH found that the SaS is an independent predictor of HE, with a SEN and SPE of 59.46 and 68.97%, respectively (67). Although the SpS has higher predictive accuracy, the SaS is an acceptable predictor of HE when CTA is unavailable. Among patients with supratentorial hemorrhage, the incidence of HE is higher in those with a positive SaS than in those with an irregular hematoma alone. The SaS is therefore a simple imaging marker with acceptable predictive value for HE, although further research is needed to clarify its underlying mechanisms.

Density-related imaging markers

Blend sign

In some institutions, the use of CTA to assess ICH patients may be limited.
As a result, researchers have explored other imaging markers that can predict HE on CT scans. In 2015, Li et al. (11) proposed the BS as a potential marker (Figure 3D). The BS is characterized by a relatively low-density region adjacent to a high-density region within the hematoma, with a well-defined margin between the two regions. The difference between the two regions should be at least 18 HU, and the relatively low-density region should not be completely surrounded by the high-density region. In a study of 172 ICH patients, the BS was detected in 29 (16.9%) patients on the baseline CT scan. The SEN, SPE, positive predictive value (PPV), and negative predictive value (NPV) of the BS for predicting HE were 39.30, 95.50, 82.70, and 74.10%, respectively. The specificity of the BS was higher than that of the SpS. The baseline hematoma volume of patients with a positive BS was larger than that of patients with a negative BS, and the hematoma was more likely to expand in patients with a positive BS, suggesting that the BS can be used as an independent predictor of HE. The BS reflects the attenuation of hematoma density on CT in patients at different stages of ICH (68). Hematoma density is affected by its composition, with hemoglobin being an important determinant of its appearance on CT. As blood coagulates, the hematoma appears hyperdense on CT, whereas during active bleeding the hematoma tends to be lower in density than the condensed clot. The BS therefore arises from the presence of blood from different bleeding episodes mixed within the hematoma, and re-bleeding into the hematoma can further lead to HE (11).

Figure 3: Imaging markers on CT and MRI. (A) Four or more small hematomas, some of which may be separate from the main hematoma, may indicate persistent bleeding from small vessels in the surrounding area, which could lead to HE. (B) A small hematoma located around the main hematoma, referred to as a "satellite" (10); if the "satellite" develops further, it may become an IS, so the SaS may represent a dynamic precursor of the IS. (C) A relatively low-density region with an adjacent high-density region within the hematoma and a well-defined margin between the two regions. (D) A low-density region within a high-density hematoma (59). (E) A low-density area within the hematoma that is completely surrounded by adjacent high-density hematoma (13). (F) Hypoattenuation associated with the hyperattenuated hematoma, with a well-defined density difference between the two attenuation regions (14). (G) A horizontal interface between hypodense bloody serum and hyperdense fluid (15). (H) SAHE may reflect the fragility of vessels as well as active bleeding in the hematoma surrounding the vessels, lending credence to the "avalanche" hypothesis (16). (I) A spot found on a contrast-enhanced T1-weighted sequence (17).

Swirl sign

The SwS is an imaging finding observed in intracranial hyperattenuated hematomas (Figure 3E) and refers to one or more regions of hypoattenuation or isoattenuation within the hyperattenuated ICH, which can be rounded, streak-like, or irregular in shape (12, 68). Its definition has evolved over time (Selariu et al.). Reported SEN, SPE, PPV, and NPV values for the SwS were 50, 71.30, 47.00, and 71.00%, respectively, and multivariate logistic regression showed that the presence of the SwS on admission CT does not independently predict HE in patients with ICH. Therefore, further research is necessary to determine the true prognostic value of the SwS for HE.
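The density-difference rule for the blend sign described above lends itself to a simple rule-of-thumb check. The sketch below encodes the published criteria, assuming that ROI mean densities and the qualitative margin assessments are already available (a hypothetical helper, not a validated classifier):

```python
def blend_sign_positive(mean_hu_hyperdense, mean_hu_hypodense,
                        well_defined_margin, hypodense_fully_enclosed):
    """Rule-of-thumb check for the blend sign as described above:
    two adjacent density regions with a well-defined margin, a density
    difference of at least 18 HU, and the hypodense region NOT completely
    surrounded by the hyperdense region."""
    return (well_defined_margin
            and not hypodense_fully_enclosed
            and (mean_hu_hyperdense - mean_hu_hypodense) >= 18.0)

# Example: 24 HU difference, clear margin, hypodense part not enclosed
print(blend_sign_positive(72.0, 48.0, True, False))   # -> True
```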
Black hole sign

In recent years, researchers have identified a phenomenon called the BHS on CT scans of patients with ICH (Figure 3F). The BHS is characterized by a low-density area within the hematoma that is completely surrounded by adjacent high-density hematoma. The sign has a clear boundary, is not connected to adjacent brain tissue, and the CT values of the two density regions within the hematoma differ by at least 28 HU. Studies have shown that the BHS is a good predictor of early HE. In a study of 206 ICH patients, 30 (14.6%) had the BHS on their baseline CT scans, and the SEN, SPE, PPV, and NPV for predicting early HE were 31.90, 94.10, 73.30, and 73.20%, respectively (13). In a comparative study of the BHS and the BS, both were found to be good predictors of HE, with the BS showing slightly higher accuracy (72). In another investigation of 129 ICH patients, both the SpS and the BHS showed good predictive value for HE, but the SpS appeared to be the better predictor (73). Furthermore, in a study of 225 patients, the presence of the BHS on the initial CT scan independently predicted poor clinical outcome at 90 days (74): patients with the BHS were significantly more likely to have a poor clinical outcome (defined as mRS ≥ 4) than those without (84.4% vs. 32.1%). Hematoma heterogeneity has also been shown to be associated with HE (52); however, assessing heterogeneity is subjective, and there is no established and reliable imaging standard for doing so. The appearance of the BHS suggests bleeding episodes at different times within a heterogeneous hematoma and could be a useful predictor of HE in patients with ICH.

Hypodensities

Studies have demonstrated a correlation between hypodensities and HE following ICH (14, 75) (Figure 3G). Moreover, unsatisfactory outcomes at 90 days have been linked to hypodensities. Factors such as larger hematoma size, prior anticoagulation use, the SpS on CTA, and a shorter time to CT have been associated with hypodensities (76). In one study, the optimal detection window for hypodensities was 1.5-3 h, with a cut-off point of 114.5 min. Clinicians are therefore advised to be vigilant when hypodensities are detected between 1.5 and 3 h after ICH onset, in order to prevent secondary neurological deterioration (77).

Fluid-blood level

In patients with ICH, baseline CT scans occasionally reveal an FBL (15, 78-81) (Figure 3H), defined as a horizontal interface between hypodense bloody serum and hyperdense fluid that has settled dorsally and is visible on CT (15, 79). The presence of an FBL on non-contrast computed tomography (NCCT) has been linked to anticoagulation use, a lobar location, and an increased risk of HE (79, 80). FBLs arise when, as a result of hematoma liquefaction, hemorrhage extravasates into pre-existing cystic cavities (15, 81). A recent study has also suggested that FBLs may serve as a vital marker of HE in patients with ICH associated with CAA (15). The density of the hematoma on NCCT may indicate distinct phases of bleeding and may be connected with clinical progression following symptom onset. NCCT attenuation in ICH evolves with clot development and the sedimentation of cellular components in the plasma, which causes hematoma density to fluctuate.
The hemoglobin content primarily determines density on NCCT, with protein-rich plasma appearing hypodense relative to surrounding tissue in the initial phase of ICH (82). Clot retraction causes relative hyperattenuation on NCCT, leading to heterogeneity in hematoma density, which may serve as a valuable predictor of the risk of HE or a poor outcome (83, 84).

Subarachnoid extension

SAHE, a new imaging marker for predicting HE and poor functional outcomes in ICH patients, was recently proposed (16) (Figure 3I). The researchers found that SAHE could predict HE in individuals with lobar ICH; SAHE occurred in 27.8% of the development cohort and 24.5% of the replication cohort. A multivariate analysis demonstrated that SAHE independently predicted the probability of HE in patients with lobar ICH after controlling for confounding variables. SAHE may indicate the existence of fragile vessels as well as active bleeding in the hematoma surrounding the vessels, lending credence to the "avalanche" hypothesis of HE (15). Furthermore, earlier research indicated that the presence of cortical superficial siderosis on MRI was substantially associated with a higher hematoma volume in individuals with lobar ICH, which indirectly supports this assumption (85-87).

Spot sign on MRI

The identification of the SpS on MRI was first proposed by Muran et al. (88) in 1998. At that time, the concept of the SpS did not exist, and any high-intensity signal on T1-weighted post-contrast images was attributed to extravasation of contrast medium. In a study of 108 patients, extravasation was observed in 39 patients, and extravasation on MRI was found to be closely correlated with HE, indicating ongoing bleeding. Aviv et al. (89) later developed an MRI-based animal model of contrast extravasation (SpS) in acute ICH, but no significant correlation was found between the SpS and HE. Since there was no corresponding MRI marker for the SpS at the time, Katharina et al. proposed MRI criteria: (1) spot-like or serpiginous high signal intensity >1.5 mm in at least one dimension, located within the margin of the hematoma and without connection to an outside vessel; and (2) no hyperintensity at the corresponding location on non-enhanced T1-weighted time-of-flight magnetic resonance angiography (MRA). In a study of 147 patients, the presence of the SpS on MRI was an independent biomarker of HE, and the presence of two or more spots was independently associated with a poor 3-month outcome; conversely, the absence of the SpS was highly predictive of a favorable evolution. Because a well-validated MRI marker is still lacking, further investigation of imaging markers on MRI is urgently needed to identify ICH patients at high risk of HE.

Minimal computed tomography attenuation value

Chu et al. (91) found that the MCTAV is an independent predictor of HE and poor functional outcome. In their study, the MCTAV was measured by manually selecting a region of interest, after which the software automatically calculated the minimal CT attenuation value. Their results showed high SEN (64%) and SPE (92%) for identifying patients at risk of HE.

Application of imaging markers and suggestions for clinicians

Using imaging markers together with patient clinical information to develop scoring systems or nomogram models that identify patients at high risk of HE is a clinical method for risk stratification and for predicting functional outcome.
Different medical institutions have different resources, so the imaging markers used to predict the risk of HE should be chosen according to what is actually available. In 2018, Morotti et al. (92) proposed the BAT score, an easy-to-use 5-point prediction score comprising a positive BS (1 point), any hypodensity (2 points), and a time from onset to NCCT of <2.5 h (2 points); a minimal illustrative computation is sketched below. Their findings showed that the BAT score can identify subjects at high risk of HE with good specificity and accuracy. This tool requires only a baseline NCCT scan and may help clinicians in resource-limited institutions distinguish high-risk HE patients. The TRAIGE trial was conducted across 10 stroke centers in China as a randomized, placebo-controlled study assessing the efficacy of tranexamic acid in preventing acute intracerebral haemorrhage growth (95). Eligible patients were identified using imaging markers such as the SpS, BHS, or BS on CT or CTA, and had to be treatable within 8 h of symptom onset. Participants were randomly assigned to receive either tranexamic acid or placebo in a 1:1 ratio. However, the results showed that tranexamic acid did not significantly prevent intracerebral haemorrhage growth among patients at risk of HE treated within 8 h of stroke onset. Larger studies are therefore necessary to better understand the effectiveness of tranexamic acid. Notably, this trial underscores the potential utility of imaging markers in identifying eligible patients for study participation. Based on the studies mentioned above, we recommend that clinicians use comprehensive scoring systems to identify high-risk HE patients. When only NCCT is available, we suggest using the BAT score. Another option is the 7-point prediction score proposed by Huang et al. (21). These scoring systems can assist clinicians in resource-limited institutions to quickly identify high-risk HE patients and start appropriate anti-expansion therapy. When CTA is available, we suggest using the PREDICT A/B scores to identify ICH patients at high risk of HE. We summarize the different prediction scores associated with HE and clinical outcomes in ICH in Table 2. In summary, stratifying high-risk HE patients on the basis of imaging markers and patient clinical information is a promising approach. Comprehensive scoring systems can assist clinicians in different medical institutions to quickly identify high-risk HE patients and start appropriate anti-expansion therapy, which is crucial for reducing poor outcomes, disability, and mortality. Challenges and areas of focus for the future Together with baseline hematoma size and location, HE is regarded as an independent predictor of prognosis and a prospective target for acute-phase therapy of ICH. In clinical practice, regular monitoring of imaging indicators can identify patients who require anti-expansion therapy. Nevertheless, it remains unclear whether anti-expansion therapy improves functional outcome or survival, which is a critical subject for further research. The Antihypertensive Treatment of Acute Cerebral Hemorrhage (ATACH-2) study found that aggressive blood pressure management did not reduce mortality or disability (96). Yet the capacity of the hemostatic agent Factor VII to limit HE was greatest within the first 2.5 h, as demonstrated in the Factor VII for Acute Hemorrhagic Stroke Trial (FAST) (97,98).
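As a purely illustrative aside on the scoring systems recommended above, the sketch below shows how the 5-point BAT score could be computed from three baseline NCCT findings. The function name, the input encoding, and the risk cut-off used in the example are assumptions made for illustration only; they are not part of the published score definition.

```python
def bat_score(blend_sign: bool, any_hypodensity: bool, onset_to_ncct_hours: float) -> int:
    """Illustrative computation of the 5-point BAT score
    (blend sign 1 pt, any hypodensity 2 pts, onset-to-NCCT < 2.5 h 2 pts)."""
    score = 0
    if blend_sign:
        score += 1          # B: blend sign on baseline NCCT
    if any_hypodensity:
        score += 2          # A: any intrahematoma hypodensity
    if onset_to_ncct_hours < 2.5:
        score += 2          # T: early imaging (time from onset to NCCT < 2.5 h)
    return score

if __name__ == "__main__":
    # Hypothetical usage; the cut-off of 3 points is an assumption for the example.
    s = bat_score(blend_sign=True, any_hypodensity=False, onset_to_ncct_hours=1.8)
    print(s, "-> high risk of HE" if s >= 3 else "-> lower risk of HE")
```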
Returning to acute-phase therapies, a subsequent analysis of the ATACH-2 study found that ultra-early blood pressure reduction lowers HE and improves outcomes in people with ICH (99). Recent findings from the Minimally Invasive Surgery Plus Alteplase for Intracerebral Hemorrhage Evacuation (MISTIE) III study showed that a minimally invasive technique was safe for patients with ICH; nevertheless, it demonstrated no effect on the primary outcome in a subset of individuals. Patients with a low risk of HE may be better suited for MIS methods when these become available, because of their decreased risk of postoperative rebleeding (100). The BS and BHS were linked to postoperative rebleeding in patients with ICH after minimally invasive surgery, according to retrospective investigations (101,102). Despite this, individuals with consistently formed hematomas had satisfactory functional results following surgery (103). Newly proposed recommendations for identifying, reporting, and interpreting these radiological indicators may provide further evidence of the predictive efficacy of imaging markers (104). Future initiatives should therefore aim to standardize the ever-expanding vocabulary of HE imaging indicators and to determine whether they should be used to select patients for ICH clinical trials. Future imaging research in ICH should focus on developing user-friendly systems that include imaging markers and parameters. When artificial intelligence is integrated into the clinical workflow as a tool to aid clinicians, more accurate radiological assessments can be performed (105-108). Conclusion The management of ICH presents a significant challenge, and identifying patients at high risk of HE is crucial to improving outcomes. The use of imaging markers for HE prediction can aid in the rapid identification of such patients, and these markers may serve as potential targets for anti-HE therapies in the acute phase of ICH. Therefore, further research is needed to establish the reliability and validity of these markers in identifying high-risk patients and guiding appropriate treatment decisions. Author contributions Y-WH and H-LH developed the initial idea for this study, contributed to the original draft, contributed equally, and are co-first authors. Z-PL and X-SY searched the relevant references and were responsible for the revision of the draft. Y-WH created the figure on etiology. X-SY provided the funding. All authors contributed to the article and approved the submitted version. Funding This study was funded by the Project of Mianyang Central Hospital (2021YJ006).
HEAVY METALS CONTAMINATION ASSESSMENT FOR SOME IMPORTED AND LOCAL VEGETABLES The objective of this study was to measure the concentrations of some heavy metals in various imported and locally produced vegetable crops, including root crops (turnips ''Brassica rapa'' and carrots ''Daucus carrota''), stem crops (potatoes ''Solanum tuberosum'' and onion ''Allium sativa''), leaves (lettuce ''Lactuca sativa''), and fruits (tomatoes ''Lycopersicon esculentum''). These crops were collected from the Baghdad central wholesale markets. X-ray fluorescence spectrometry was applied to measure heavy metal concentrations. In the imported vegetables, the mean heavy metal concentrations were arranged in the following decreasing order: Fe > Zn > Cu > Ni > Co > Cd > Cr > Pb, whereas higher levels of Cr, Fe, Cu, Zn, Cd and Pb (1.2075, 165.995, 37.2275, 43.775, 6.0375 and 1.48 mg kg-1, respectively) were found in the locally produced vegetables. A high level of Co (3.09625 mg kg-1) was also detected in onion, while increased concentrations of Ni (7.8675 mg kg-1) were found in lettuce collected from the local market. Overall, significant differences in heavy metal concentrations between imported and locally produced vegetables were observed. The daily intake of four main heavy metals (Cd, Cu, Zn and Pb) was estimated, which revealed a high intake of Cd (310 and 372 µg per day for imported and local vegetables, respectively). This study suggests raising the awareness of society and the Iraqi government about this problem and taking into consideration its impact on general health and the environment. INTRODUCTION Vegetables provide a fast, low-cost and adequate source of vitamins, minerals and fibers (2). Vegetables contain fundamental nutrients such as proteins and calcium, which are part of the essential life-supporting materials for humans and animals (34). During digestion, vegetables play an important role as neutralizing agents for the acidic substances that arise during this process (35). Moreover, one of the most important benefits of vegetables is their anti-oxidative effect against different toxicants (11). Various plant species have their own nutritive requirements, including micro- and macro-elements which support budding and growth (14). Several heavy metals are regarded as trace micronutrients which plants and higher animals require only in small quantities, so health problems may arise when the concentrations of these metals exceed acceptable limits (22). A wide range of minerals is taken up by plants through their roots from contaminated soil and is transported to the shoot apex, accumulating in different edible plant parts (25). Another pathway of entry has been identified via atmospheric deposition on leaf surfaces, followed by absorption into the plant tissues (16). This explains why these contaminants reach the human food chain, leading to many health issues in the populations exposed to them. Heavy metals are classified depending on their chemical properties, which include atomic number, atomic weight and toxicity (27). Their harmful effects are due to their non-biodegradability, long biological half-life and potential accumulation in body organs (29). Heavy metal emission sources in the environment are both natural, such as soil, rock erosion and dust, and anthropogenic, such as industrial activities, mining, rapid urbanization, transportation and pesticides (38). Extended and excessive application of organic manures and of both organic and inorganic fertilizers can lead to high levels of heavy
metals as well as other ions (2).In addition this random application of fertilizer not dissolved soil fertility problems and not increased its efficiency to provide crops with nutrients unless its insure a good nutrient balance in order to produce an ideal yield (3). Recent studies revealed that wastewater effluents are loaded with different types of heavy metals, including cadmium (Cd), capper (Cu), zinc (Zn) and nickel (Ni), which their effect is going beyond the physiological necessity of plant but instead tending to be magnified in next steps of food chain (30).This is already proved by their binding ability to different protein molecules, preventing the replication of DNA and affecting cells division (8).It has become obviously, that prolong consumption of heavy metals can lead to numerous defects in biochemical processes of nervous, vascular systems as well as bone diseases (15).Some metals such as Cu and Zn, when exceeding allowable limits, are acting as oxidative stressing factors through oxidative -reduction reaction (redox), thus altering the productionutilization of energy in living systems (31).Lead (Pb) has the same oxidative effect and can cause mental retardation in children (7).In contrast, chromium (Cr), especially (CrVI) form, is carcinogenic.Whereas glomerular damage and metabolic alteration are caused by cadmium (Cd) long term exposure (24).Simultaneously, cobalt (Co), nickel (Ni), iron (Fe), have received more attention because of their adverse effect on health (18).Hence, these facts insure their association with many chronic diseases in humans (10).Heavy metals harmful influence and persistence in vegetables have been well studied and documented extensively in many countries around the world (5).This study aimed to measure the concentration of heavy metals in imported and local vegetables at four markets of Baghdad city. 
MATERIALS AND METHODS Four local sites were chosen for samples collection during 2016 (Al-Taji, Al-Shuala, Al-Zaafaraniya and Al-Rahased markets), which represent the northern, western, eastern and southern sites of Baghdad city, respectively.All those markets serve as the whole suppliers and sales sites for vegetables to other internal or minor markets in the city.For local produced crops, selected vegetable were collected in two different times.These samples were classified to root crops (Turnips ''Brassica rapa'' and Carrots ''Daucus carrota''), stem crops (Potatoes ''Solanum tuberosum'' and Onion ''Allium sativa''), leaves (Lettuce ''Lactuca sativa'') and fruits (Tomatoes ''Lycopersicon esculentum'').Samples were designed in such way to cover common edible parts of vegetables which repeatedly consumed by people at Baghdad city.Regarding imported vegetables, two batches were chosen within two time intervals during the year in which the study was carried on.This was done to insure more accuracy of the collected data.Samples were put in plastic bags and closed tightly to prevent any cross contamination, then brought to the laboratory for further analysis.The collected samples were washed thoroughly with tap water first and then with deionized water to remove dust particles.These samples were cut into small pieces using a clean knife.Different parts of roots, stems, leaves and fruits were air dried for two days in shade and then kept in hot oven not more than 70ºCto dry and left to cool at room temperature (25).Each sample was grinded in to a fine powder, using mortar at first and then a commercial blender.The powders were passed through a 212 µm mesh sieve and finally stored in air tight sealed screw cups appropriately labeled until for analysis (19).Three grams of the prepared vegetables powder were weighed and taken to analysis (4).Determination of heavy metals, such as (Cr, Fe, Co, Ni, Cu, Zn, Cd and Pb) levels was carried out using x-ray fluorescence spectrometry according to methods described previously(6) (9).The following equation was applied to calculate the daily intake of heavy metals within the vegetables tested in this study by Elbagermi et al. (11): This equation was mainly used to calculate the concentration of Pb, Cd, Cu and Zn as recommended by Joint FAO/WHO (17).Data was analyzed and displayed using GraphPad virgin 6 (v6) and significance was determined using multiple T-test analysis, depending on data normalcy. 
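The displayed daily-intake equation attributed to Elbagermi et al. (11) appears to have been lost from this copy. A hedged reconstruction consistent with the stated assumptions (an average adult of 60 kg consuming about 98 g of vegetables per day) and with the µg-per-day values reported later is EDI = C × W, where C is the metal concentration in the vegetable (mg/kg) and W is the daily vegetable consumption (kg/day). The sketch below implements this assumed form; the concentration value in the example is a placeholder, not one of the paper's measured means.

```python
# Hedged reconstruction of the estimated daily intake (EDI) calculation.
# Assumed form: EDI [ug/day] = concentration [mg/kg] * daily consumption [kg/day] * 1000.

DAILY_VEG_CONSUMPTION_KG = 0.098   # 98 g of vegetables per day (stated assumption)
BODY_WEIGHT_KG = 60.0              # average adult weight, used only for per-kg comparison

def estimated_daily_intake_ug(concentration_mg_per_kg: float,
                              consumption_kg_per_day: float = DAILY_VEG_CONSUMPTION_KG) -> float:
    """Estimated daily intake of a metal, in micrograms per day."""
    return concentration_mg_per_kg * consumption_kg_per_day * 1000.0

if __name__ == "__main__":
    example_mean_cd = 3.2  # mg/kg, illustrative mean Cd concentration (placeholder)
    edi = estimated_daily_intake_ug(example_mean_cd)
    print(f"EDI(Cd) ~ {edi:.0f} ug/day, "
          f"or {edi / BODY_WEIGHT_KG:.2f} ug/kg body weight/day")
```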
RESULTS AND DISCUSSION The mean concentrations of heavy metals for imported batches of collected vegetable samples are given in Table 1.The results demonstrated that the detected heavy metals are present in all samples and their concentrations in these samples are varying.Lead (Pb) ranged from 0.325 to 1.5575mgkg -1 in tomato and lettuce.It was observed that Fe had the maximal values in lettuce samples (411.625mgkg - ).While the highest concentration of Cu (13.65 mgkg -1 ) was observed in tomatoes.In the case of Zn, this element value was between 33.525 mgkg -1 in onion and reached to 24.88mgkg -1 in carrots.In the terms of its concentration, Ni was found to be variable as following: Lettuce>tomatoes>turnips>onion>carrot> potatoes.In table1, the average of Cd was ranging from (1.625-5.575)mgkg -1 .Interestingly, Co had similar concentration in all samples which was about (3.1) mgkg -1 .Cr was present in close values in almost all collected samples except with turnips which had the highest one (1.7475mgkg - ).From what is previously mentioned, results showed that the concentrations of heavy metals in vegetable samples were found according to their abundance in the following order: Fe>Zn> Cu>Ni>Co>Cd>Cr>Pb.Mean concentrations of heavy metals in different local vegetables are given in Table2.Approximately, the same variation in heavy metal concentrations for imported groups was observed in local types.Results showed that value of Fe in lettuce (leafy vegetables) is higher than other vegetables.Pb concentration in tomatoes 0.465 mgkg -1 .Whereas the highest value of Pb was observed in turnips (1.48mgkg -1 ).Simultaneously, Table 2 illustrated an interesting value of Cu (37.2275mgkg -1 ) and Zn (43.775mgkg -1 ) in turnip samples, when compared with other vegetables.The highest concentrations of Ni were noticed in lettuce (7.8675mgkg -1 ) followed by a 7.7775 mgkg -1 in turnips.However, Cd maintained its range 1.525mgkg - 1 in turnips and 6.0375mgkg -1 in onion.Interestingly, Cr and Co concentrations in local vegetables were similar to those recorded in imported types (1.03 and 3.1 mgkg -1 respectively).In spite of new values such as Cr (1.2075) mgkg -1 in lettuce, and for Co (2.8375) mgkg -1 in potatoes. Statistical analysis revealed that significant differences were noticed between local and imported vegetables collected from various sites.The obtained results were compared with recommended limits established by FAO/WHO and other international guide lines of heavy metals in food (edible parts of different vegetables) to assess levels suitable for human consumption (17).The most important factor is the intake by human.In contrast, there is no national legislation dealing with standard levels of heavy metals in vegetables, so that most of published studies depend on these guidelines to insure improvement of food safety (21).As mentioned before, the concentration of Fe in lettuce was high.Fe concentrations were significantly (p<0.01)higher as compared to other vegetables.Similarly, this was found by Santamaria et al. 
(28) that different parts of plant had varied concentrations of heavy metals.Moreover, they reported that leaves had higher levels of these contaminants.In present study, Fe recorded concentrations reached to (411.625) mgkg -1 (table, 1) which is considered much higher than those reported by Zahir and Mohi (36) when they observed that the Fe concentration in different analyzed vegetables ranged from 7.9-24.8mgkg -1 .The present results were in agreement with those found by Ali and Al-Qahtani ( 5) who reported Fe concentrations 31.9mgkg - and 543.5mgkg -1 for onion and parsley, respectively.The reason behind the Fe high concentrations in leaves is that leaves are considered as food making factories in plant; therefore Fe can be promoted and accumulated in them.This explains why carrot (root) contains concentration (8.64mgkg -1 Table, 1). Figure 1. Mean concentrations of Fe levels (mgkg -1 ) in imported and local vegetables ** (p <0.01) Results showed that some Cu values are within acceptable limits (10mgkg -1 ) as recommended by FAO/WHO (21).There was a significant variation (p<0.001) between imported and locally samples of tomato.1) which was similarly obtained by Mills and Jones (23) when they suggested two forces are controlling Cu absorption in vegetables.These forces are including: pH imbalance and an excess of nutrients such as phosphorus.Regarding the Zn concentration results, an increase above the permitted standards (5-20 mgkg -1 ) was observed (32).There were a significant differences (p<0.01) for potatoes and lettuce.Moreover, tomatoes showed the same high significance (p<0.001).In contrast, Pb contents occurred within steady limits (0.465-1.5575) mgkg -1 for local tomatoes and imported lettuce.(5).In Iraq, leaded fuel is still used for vehicles and diesel generators, resulting in emitting large amounts of Pb.Lokeshwari and Chandrappa (20)mentioned that some soil factors might have an impact on stimulating plants to increase their Pb uptake, including: pH, particle size, cation exchange capacity of soil, root exudation and other physico-chemical parameters.Taghipour and Mosaferi, (32) described more factors such as moisture, temperature, soil properties and degree of plant maturity that could influence Pb uptake by plants.Fergusson (13) found that the ability of heavy metals to transmit and form stable coordinated compounds with organic and inorganic matter makes them potentially toxic.The same approach was obtained by Rapheal and Adebayo (26) when they suggested that heavy metals are able to interact with other soil organic compounds and absorbed by growing plants and that these elements become more dangerous in the form cation or when bound to short chain of carbon atoms rather than free state.In general, Iraqi soil is lacking Pb element therefore, it is obviously a substantial amount to be added along with aqueous phosphate fertilizers (6).The present results indicated various concentrations of Cd most of them are higher than normal levels (0.05-1 mgkg -1 ) as reported by (33).Enrichment of the soil with cadmium leads to its higher presence due to two combined factors; the first is its relative mobility whereas the second is its affinity to associated with organic matters (26).Generally, cobalt concentration in this study was higher than those obtained by Lawal and Audu (19).They found 0.12 to 1.14 mgkg -1 in onion and the lowest was recorded in tomatoes.Nickel, determined in this work, was above the critical limits (2.7 mgkg -1 ) which were recorded by the national agency for 
food and drug administration and controls (19). Lettuce, as a green leafy vegetable (GLV), reported a high Ni value (7.8675 mg kg-1, Table 1) as compared with the study of Abdulazeeza and Azizb (1), in which Ni fell in the range between 0.037 and 0.503 mg kg-1. Another source of Ni is the corrosion of vehicle engines, especially old ones (12). In a relevant study, Mahakalkar et al. (22) described what is known as the transfer factor (TF) of different heavy metals from soil to vegetables. They showed that the TF varies from one metal to another depending on the efficiency of the plant species in accumulating a given metal. This might explain the variation in metal values observed among the different vegetables in this study. Furthermore, Zeng and Mei (37) reported that the transportation and marketing system contributes to elevating metal levels above those at the production sites. In Iraq, groceries are commonly located near main roads and highways, which brings them into contact with automobile emissions. Furthermore, washing vegetables with river water at farming sites may increase these contaminants. FAO/WHO has set limits for the daily intake of some heavy metals under the term Provisional Tolerable Daily Intake (PTDI) (17). These standard limits assume an average adult weight of 60 kg and a consumption of at least 98 g of vegetables per day. According to the equation of Elbagermi et al. (11) mentioned above, the results obtained for imported and local vegetables are shown in Table 3. Table 3 exhibits the daily intake values of heavy metals for both imported and local vegetables. The results revealed that the estimated daily intake of heavy metals in this study is above that documented in Misurata markets (11), but still within the acceptable limits recommended by FAO/WHO (214 µg, 3 mg and 60 mg for Pb, Cu and Zn, respectively), except for Cd, which exceeded its permissible limit (60 µg per day) (17). It is recommended that a healthy diet include the consumption of a large amount of vegetables, to provide the body with the essential compounds that support survival, under what is known as protective food. Unfortunately, vegetables have the ability to accumulate high concentrations of heavy metals through different pathways, and they are a main supplier of these contaminants to the food chain due to their position in the life system. This study represents an attempt to evaluate the levels of these heavy metals in the edible parts of various vegetables widely consumed by people in Baghdad city and to assess their adverse effects on human beings. Most of the metals detected in this study are above the recommended levels proposed by FAO/WHO and other international agencies. Consequently, it would be helpful to investigate the importance of long-term exposure to foodstuffs carrying such contaminants and to raise awareness of their influence on public health. Moreover, the daily intake of some heavy metals was calculated in order to estimate the levels of those contaminants in the daily meal.
Figure 2. Mean concentrations of Cu levels (mg kg-1) in imported and local vegetables, *** (p < 0.001). In some lettuce samples Cu was within safe limits, which was interpreted by Parvin et al. (25), who pointed out that copper concentrations correspond to chlorophyll richness in leafy vegetables. However, imported turnips indicated a low copper value (6.35 mg kg-1, Table 1).
Figure 4. Mean concentrations of Pb levels (mg kg-1) in imported and local vegetables, ** (p < 0.01). Figure 4 indicated a significant variation (p < 0.01) within lettuce. The variations in Pb concentration are attributed to the traffic intensity effect, since this heavy metal is emitted from car exhausts (5).
Figure 5. Mean concentrations of Cd levels (mg kg-1) in imported and local vegetables. No significant differences were found among the collected vegetables. In this study, Cr levels ranged between 1.025 and 1.7475 mg kg-1, which is close to what was noted by Mutune et al. (24) in an industrial area, where values between 1.19 and 1.24 mg kg-1 were reported for spinach and spider plants, respectively. Corrosion of the chrome plating of vehicle motors may increase the emissions of this element (12).
Figure 7. Mean concentrations of Co levels (mg kg-1) in imported and local vegetables, * (p < 0.05).
Figure 8. Mean concentrations of Ni levels (mg kg-1) in imported and local vegetables, *** (p < 0.001). Statistical analysis for Ni concentrations indicated a significant variation (p < 0.001) in tomato.
The Hopfield-Kerr model and analogue black hole radiation in dielectrics In the context of the interaction between the electromagnetic field and a dielectric dispersive lossless medium, we present a non-linear version of the relativistically covariant Hopfield model, which is suitable for the description of a dielectric Kerr perturbation propagating in a dielectric medium. The non-linearity is introduced in the Lagrangian through a self-interacting term proportional to the fourth power of the polarization field. We find an exact solution for the nonlinear equations describing a propagating perturbation in the dielectric medium. Furthermore the presence of an analogue Hawking effect, as well as the thermal properties of the model, are discussed, confirming and improving the results achieved in the scalar case. Introduction The spurring suggestion that Hawking radiation [1,2] could be produced in a non-gravitational physical context [3], has triggered the investigation of a plethora of physical systems able to mimic the basic kinematics at the root of the thermal pair production associated with a black hole [4,5]. Among these, a very interesting option is represented by electromagnetic analogous systems in dielectrics, in which an electromagnetic pulse is made propagate and interact within a dispersive non-linear material. Due to the Kerr effect [6,7] the pulse generates a refractive index perturbation, whose properties can be adjusted to give rise to (black and white hole) horizons for the electromagnetic field, as first discovered in [8] and then discussed in several papers [9][10][11][12][13][14][15][16][17][18]. In order to study this system in presence of dispersion, as well as the analogue Hawking radiation that it could produce, a model which describes the quantum interaction between the electromagnetic field and the matter field is needed. An interesting starting candidate for this purpose is the Hopfield model [19,20]. We recall that the Hopfiel model describes matter simply as a set of resonant oscillators, nonetheless it can faithfully reproduce the dispersive behaviour of the electromagnetic field thanks to the interaction with the dipole field [21], indeed yielding the correct (Sellmeier) dispersion relations. As far as we are interested in frequencies far from the absorption region, we do not take into account absorption in our discussion, which would require a much more involved approach. To analyse the effects generated by the presence of an inhomogeneous perturbation propagating in the medium, one has to deal with different inertial frames. To this aim a relativistically covariant version of the model was developed in [22]. In the current paper we base our analysis on a further refinement of the relativistically covariant Hopfield model, dubbed Hopfield-Kerr model, in which a self-interacting polarization term is added to the Lagrangian to describe the intrinsic non-linear effects of the dielectric medium. This work represents an improvement with respect to [23], in which a perturbative analysis of photon production was made through the quantization in the lab frame in a simple fixed gauge, and to [16], in which a non-perturbative deduction of thermality was accomplished in a simplified scalar model. See also [24,25] for an exact quantization of the covariant Hopfield model. 
We eventually stress that the Hopfield-Kerr model is a more rigorous and fundamental reference model with respect to the ones existing in the literature concerning dielectric black holes, particularly because it automatically includes optical dispersion and the non-linear effects of the medium. The main goal of this paper is the description of the thermal behaviour of the Hopfield-Kerr model, in order to complement and generalize the results found in [16,18]. The scheme of the paper is as follows. In section 2 we study the quantum fluctuations living on a generic background solution of the non-linear equations of motion, finding out that our model gives rise to a negative Kerr effect on the physical spectrum. Besides, in section 2.2, an exact solitonic solution for the equations of motion of the Hopfield-Kerr model is reported. In section 3 the analysis concerning the thermal behaviour of the model is undertaken following the seminal procedure introduced by Corley [26]. The results found for the temperature are in full agreement with [16] and an identification of the long-wavelength modes is presented. The paper is also provided with some appendices. In appendix A we talk over the different possible configurations available in the near-horizon analysis of the equations of motion. In appendix B we show the relation between the microscopic parameters of the model and the physical ones. In appendix C we derive approximated solutions of the physical dispersion relation in the linear region, while in appendix D coalescence of branch points in the limit as k 0 → 0 is also discussed. As regards the notation, we use natural units throughout the paper, except when explicitly expressed, as well as the mostly minus signature. We shall use bold font, e.g. x x x or k k k, for the spacetime four-vectors, whose components are x µ or k µ , whereas the spatial components will be indicated as x or k. We shall use ω := v µ k µ for the frame-invariant laboratory frequency, and we will use k 2 , for example, meaning the scalar k k k 2 = k k k · k k k. The relativistic Hopfield-Kerr model and an exact solution Let us consider the relativistic Hopfield model with a single polarisation field with resonance frequency ω 0 , as presented in [22]. The Lagrangian density is We now add a nonlinear self interaction (Kerr nonlinearity) modifying the Lagrangian to The totally symmetric rank-four tensor σ σ σ has the property that the contraction of any of its indexes with v v v produces a vanishing result. 1 We now assume homogeneity and isotropy of the tensor, which means that it is constant and invariant under the action of the little group G v v v : the subset of the proper Lorentz transformations leaving v v v invariant. Since v v v is timelike, this is a compact group isomorphic to SO(3). From the representation theory it follows immediately that the space of rank four tensors invariant under G v v v is a three-dimensional vector space of the form Since σ σ σ is totally symmetric, one must have σ 1 = σ 2 = σ 3 =: σ/4!. Hence, taking into account the constraint v µ P µ = 0, where P 2 := P P P · P P P = P µ P µ . The equations of motion then take the form together with the defining constraint v µ P µ = 0. 2.1. Linearized quantum theory. We are now interested in studying the equations of motion for the fluctuations lying on a given background solution of the Hopfield-Kerr equations of motion. This can be done through a linearisation of the Lagrangian. If we define the quantum fluctuations of the fields w.r.t. 
a background solution to be A A,P P P andB, so that where (A A A 0 , P P P 0 , B 0 = 0) represent the generic background solution, the Lagrangian density can be written as (2.10) It is convenient to consider a background solution which, for the polarization field, takes the form where ζ ζ ζ satisfies The linearisation is undertaken by dropping out the last two terms in L Kerr : There are three polarizations forP P P (which satisfiesP P P · v v v = 0): one parallel and two perpendicular to ζ ζ ζ. We can treat these modes separately and write P P P 2 − 2(ζ ζ ζ ·P P P ) 2 = 3P P P 2 , ifP P P ζ ζ ζ P P P 2 , ifP P P ⊥ ζ ζ ζ . (2.14) This seems to suggest that the shift from the linear Hopfield Lagrangian to the Hopfield-Kerr linearised Lagrangian could be equivalently achieved via the simple modification: while keeping χω 2 0 fixed. This is implemented by introducing a modified space-dependent 2 susceptibility and resonant frequency:χ where, in general, δχ(x x x) depends on the polarization: Notice that, independently from the specific solution, δχ(x x x) is always positive. Now we are interested in analysing how the refractive index changes due to the propagating perturbation. For the transverse modes the dispersion relation in the lab frame 3 (see eq. (3.5) for the DR in a general frame, for a visual representation see fig. 1) is whose gradient gives 20) so that the phase and group velocity in the lab frame are where we have definedω = ω 0 1 + 4πg 2 χ. The phase refractive index is In the presence of a background solution the new index becomes . (2.24) From here we see that 25) which means that the perturbation induces a decrease in the phase refractive index on both branches (see the following discussion). For the group velocity we get (2.26) Varying χ and ω 2 0 as above, the invariance of χω 2 0 implies the invariance ofω 2 − ω 2 0 as well. By taking the derivative of eq. (2.26) w.r.t. ω 2 0 , keepingω 2 − ω 2 0 fixed, one easily finds that such derivative is negative in both branches. Since δχ(x x x) is positive we again get the same result as for the phase refractive index. This means that the relativistic linearized Hopfield-Kerr model realises a negative Kerr effect 4 on both branches of the transverse spectrum (we assume the coupling constant g to be positive), both for the phase refractive index and for the group refractive index. The aforementioned behavior could be amended by assuming σ < 0, thus obtaining the expected positive Kerr effect. The evident drawback is that the energy in the latter case would be unbounded from below. Still, we can stress that the original potential for the polarization field could also be corrected by a sixth order perturbation with the right sign in order to obtain again an energy bounded from below. This point of view is shared by the classical anharmonic model for centrosymmetric media, as discussed e.g. in [27], where the potential energy associated with the restoring force acting on an electron involves a negative quartic term, which would be responsible for an energy unbounded from below. In that case, one assumes that the electronic displacement is small in such a way that higher order terms (which are implicitly assumed) are safely negligible. We limit ourselves to consider our ansatz for a quartic polarization term as the lowest order correction to the polarization field. We can notice also that the original behaviour can be reproduced in metamaterials, with the only requirement that the Kerr index be negative. 
Much more interestingly, this behaviour is the one required for the so-called black hole lasers [28][29][30]. It is to say that, for simplicity, we called this phenomenon a Kerr effect. Notice however that for small δχ(x x x) the variation of the refractive index is proportional to P P P 2 0 rather than to the intensity of the electromagnetic signal. Nevertheless, for the solitonic solution we are to introduce in the next subsection, eq. (2.40), we have that P P P 2 0 ∝ B 2 and we can talk about Kerr effect in a proper way. 2.2. An exact solitonic solution. It would be interesting to find a particular background solution of the nonlinear equations of motion, able to describe the propagation of a laser pulse in a nonlinear dielectric medium. We expect the profile of the laser pulse to evolve in time very slowly w.r.t. the pair-creation process we are interested in. Hence we can concentrate our attention on static solutions in the comoving frame, of the form where α is a constant vector and ζ ζ ζ is as reported in eq. (2.12). We will also impose B = 0, so that ∂ µ A µ = 0, and set z := α · x. This way, the equations of motion take the form 1 4π (2.29) 4 By negative Kerr effect we mean a decrease in the refractive index of the medium in response to the passage of an electromagnetic pulse. The second equation suggests to take A µ = ζ µ h(z), while the first one suggests to take α · ζ = 0, which corresponds to B = 0. Then we have Focusing on the particular solution α = α v, we can integrate the first equation, yielding 5 h (z) = 4π g α f (z), (2.32) and insert it into the second one, obtaining This can be integrated and rewritten in the form where K is an integration constant. If we now assume that the condition 4πg 2 v 2 χ > 1 holds, we can also assume K = 0, so that the integral considerably simplifies. Indeed, in this case we can write which can be integrated to Thus we have found that the Hopfield-Kerr model admits an exact solitonic solution, which, in the comoving frame and for the polarization field, takes the form where ζ ζ ζ is as defined in eq. (2.12). It is interesting to underline that the electric field associated with this solution in the comoving frame is zero, whereas the magnetic field is This fact is important for a correct interpretation of the refractive index modification induced by this solitonic solution as a Kerr effect, as outlined at the end of the previous subsection. Note that for standard transparent materials the Sellmeier coefficient 4πg 2 χ is typically smaller than 1. This means that the solitonic solution, eq. (2.37), is acceptable only as long as | v| is large enough. If we define ν to be the velocity of the comoving frame w.r.t. the dielectric frame, i.e. v 2 = γ 2 ν 2 , we have as a condition for the existence of the solitonic solution | ν| > ν c := 1 It is not obvious whether and why we should expect the existence of the solitonic solution only for velocities (of the solitonic envelope) larger than the critical value ν c . It may be related to the influence of the soliton on the refractive index. From now on we will only consider positive velocities parallel to the z-axis, in particular we will set 6 v v v = (γ, 0, 0, −γv), where v will be the absolute value of the chosen frame's velocity w.r.t. the dielectric frame. In turn, this implies that the background solution will only depend on the spatial variable z. For later convenience, according to the foregoing conventions, we rewrite the solitonic solution, eq. 
(2.37), in the form where we have defined where τ corresponds to the amplitude of the soliton and where β is inversely proportional to the width of the solitonic envelope. This means that in the limit ν → ν + c the solitonic solution flattens on the real line and fades away. On the thermality of the Hopfield-Kerr model We are now interested in the thermal properties of the Hopfield-Kerr model, independently from the particular background solution adopted. Anyway, in order to simplify the calculations, we restrict ourselves to background solutions propagating only along the z-axis. The technique used to infer thermality for our model is based on the seminal work [26], as well as on the refined method proposed in [31]. The basic idea is not very different from the staple technique used to solve the Schroedinger equation in a smooth space-dependent potential, which exhibits a turning point [32]. On the one hand, we consider the equations of motion far from the inhomogeneity, which are approximately linear. We exploit the multicomponent WKB method (see [33]) to show that the solutions of these equations are superpositions of plane waves, which are linked to the solutions of the asymptotic physical dispersion relation. Through this general analysis it is also possible to gauge the asymptotic behaviour of these modes' amplitudes, going first-order in the WKB expansion. Since we are interested in matching these asymptotic solutions with the ones valid near-horizon, where the WKB approximation breaks down, we have to push this WKB analysis as close to the horizon as possible. On the other hand we study the near-horizon solutions, namely the solutions of the equations of motion in which the potential has been linearized near the horizon. These are obtained through a transformation of the equations of motion to the Fourier space. Following the foregoing argument, we are interested in considering these solutions as far to the horizon as possible, to the limit of their validity range. If the variation of the refractive index on the turning point is slow enough, there always exists a so called linear region in which both the near-horizon analysis and the WKB analysis hold, allowing the matching between their solutions to be undertaken. In this approximation, the near-horizon solutions corresponding to the short-wavelength modes can be used to estimate the temperature of the model, for in this case the monotone branch mode decouples from the other modes, giving rise to subdominant scattering phenomena w.r.t. the Hawking emission (see appendix C). Moreover we show that a better identification of the two long-wavelength modes w.r.t. the ones present in the literature is feasible. Nevertheless we put off the delicate issue of the grey-body factor computation to a future work. 3.1. Far horizon WKB analysis. The linearized equations of motion of the Hopfield-Kerr model, in the Feynman gauge (ξ = 4π) and without writing the equation for the fieldB, are: In order to solve this PDE system we firstly have to separate variables, to get an ODE system, secondly we have to implement the WKB method (see [16,33]). This can be done by looking for solutions of the form Now we proceed with the expansion of the equations of motion in orders of . 3.1.1. 0 th order. At this order the equations of motion take the form: from which we deduce the new space-dependent dispersion relations (DRs). They are very similar to the linearmodel DRs [24], but with the fundamental modifications ω 0 →ω 0 (z) and χ →χ(z). 
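Since the displayed equations of this subsection did not survive extraction, it may help to recall the form the transverse dispersion relation takes in the homogeneous medium. Given the definition of ω̄ above, the standard single-resonance (Sellmeier) form — stated here as a hedged reconstruction, not a quotation of the original equations — is

```latex
% Hedged reconstruction of the homogeneous transverse dispersion relation
% implied by \bar{\omega}^2 = \omega_0^2 (1 + 4\pi g^2 \chi)  (lab frame, c = 1):
\[
  k^2 \;=\; \omega^2\, n_\varphi^2(\omega),
  \qquad
  n_\varphi^2(\omega) \;=\; 1 + \frac{4\pi g^2 \chi\, \omega_0^2}{\omega_0^2 - \omega^2}
  \;=\; \frac{\bar{\omega}^2 - \omega^2}{\omega_0^2 - \omega^2},
\]
% with the two propagating branches \omega < \omega_0 and \omega > \bar{\omega};
% the space-dependent case follows by \omega_0 \to \tilde{\omega}_0(z), \chi \to \tilde{\chi}(z).
```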
The DR we are interested in is the transverse (or physical) one: Since this is a quartic equation, its exact solutions are too involved to be useful. Hence we will limit to the solutions of the physical DR approximated in the large-η limit, where η is defined below in eq. (3.24) (see also appendix C). Yet, remember we are interested in the linear region behaviour of the modes. In this region (as well as in the near-horizon region) the space-dependent refractive index, n(z) :=n f (z), defined in eq. (2.24), can be linearized near the horizon, i.e. n(z) 1 v − |κ|z. Without loss of generality, we have shifted the z variable in order for the horizon to be displaced at z = 0 and we have defined 7 κ := dn dz (0), which is negative on the black hole horizon. Still, since the WKB analysis breaks down near the horizon, we are not allowed to move too close to it. At any rate, for small enough |κ|, a linear region in which both the linearisation and the asymptotic WKB analysis are valid exists (see eq. (3.37)). The approximated solutions of the physical DR are reported in eqs. (C.7) to (C.9). The integral of such solutions represents the behaviour of the modes' phases in the transverse DR, which are reported in table 1. Since we are interested in the matching of such asymptotic solutions with the near-horizon ones, we are also interested in the zero-order amplitudes of the fields. Given that the zeroth order equation leaves one solution undetermined (M (0) has to be considered on shell), in order to obtain such amplitudes we have to go first order in the expansion. 7 The linking between the surface gravity and the derivative of the refractive index is : 3.1.2. 1 st order. The equations of motion restricted to the first-order in terms of take the form: where (c.f. with matrix (52) of [16]) In order to find the zeroth order amplitude for the fields, we follow the theory of the multicomponent WKB method (see e.g. [33]). As shown in [24], on the transverse branch, M (0) admits two linearly independent right null vector fields, which are where e i , i = 1, 2, are four-vectors satisfying e i · k = 0 and e i · v = 0. There are obviously also two linearly independent left null vector fields, which will be named λ i , i = 1, 2, which are the transposes of the right null vector fields. The zeroth order amplitude can be devoleped over the basis made by ρ 1 , ρ 2 and other six linearly independent not-null vector fields, i.e. A A A 0 P P P 0 = 8 k=1 ρ k a k . Yet, since eq. (3.3) must hold, we have that a k = 0, ∀k = 1, 2. Thus if we insert this expression into the first order eq. (3.6) and project on the left null eigenvectors we have where a k := a k (z), with k = 1, 2, are the coefficients to be found. To compute them an explicit expression for e i = e i (k 0 , k) is needed. It is not difficult to find two linearly independent vectors satisfying the above mentioned orthogonality relations for e i , giving (3.10) Now, since this equation has to be solved on-shell, we turn to the 2D-approximated case (k x = k y = 0), for which tractable DR roots are available (see appendix C). In this case we very simply have (3.11) At this point the explicit form of the differential equations in eq. (3.9) can be computed. Yet, due to the particular (almost diagonal) structure of the matrix operator M (1) , these differential equations are decoupled equations for the amplitudes a 1 (z) and a 2 (z), which turn out to be identical. The solutions in the linear region are summarized in table 1. 
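As a numerical aside, the statement that the transverse DR is a quartic whose exact roots are unwieldy can be checked directly: at fixed comoving-frame frequency Ω = γ(ω − v k), the Sellmeier-type relation recalled above becomes a quartic polynomial in k, whose four roots are easily obtained numerically. The sketch below does this with NumPy; all parameter values are illustrative placeholders, and the dispersion relation used is the hedged reconstruction above, not the paper's exact expression.

```python
import numpy as np
from numpy.polynomial import polynomial as P

# Illustrative parameters (placeholders, not values from the paper); units with c = 1.
omega0 = 1.0                      # resonance frequency
sellmeier = 0.5                   # 4*pi*g^2*chi
omega_bar = omega0 * np.sqrt(1.0 + sellmeier)
v = 0.68                          # velocity of the comoving frame w.r.t. the dielectric
gamma = 1.0 / np.sqrt(1.0 - v**2)
Omega = 0.05                      # conserved comoving-frame frequency

# Lab-frame frequency at fixed comoving frequency: omega(k) = Omega/gamma + v*k.
omega_of_k = np.array([Omega / gamma, v])          # ascending polynomial coefficients in k
omega2 = P.polymul(omega_of_k, omega_of_k)

# Transverse dispersion relation k^2 (omega0^2 - omega^2) = omega^2 (omega_bar^2 - omega^2),
# rearranged into a quartic polynomial in k.
k2 = np.array([0.0, 0.0, 1.0])
quartic = P.polysub(P.polymul(k2, P.polysub([omega0**2], omega2)),
                    P.polymul(omega2, P.polysub([omega_bar**2], omega2)))

roots = np.roots(quartic[::-1])                    # np.roots expects descending coefficients
print("k roots at fixed comoving frequency Omega:")
print(np.sort_complex(roots))
```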
Modes Counter-propagating Long-wavelength (Hawking) Short-wavelength Table 1. Amplitude and phase factor of the WKB-approximated field solutions in the linear region ( = c = 1). 3.2. Near horizon analysis and matching. Let's now concentrate on the field equations near the horizon. Let's start by considering the linearized equations of motion, eq. (3.1). In the Feynman gauge and under a spatial Fourier transformation, we can explicitly express the field µ in terms of the polarization field: Substituting into the second equation we obtain a single differential equation for the polarization field: Bearing in mind eqs. from (2.15) to (2.18), we can linearize the susceptibility very near the horizon: where α := d dz 1 χ (0) is positive on the BH horizon and such that |α|z 1 χ . Moreover 1 χ(0) = 1 χ + δχ(0) (see appendix B for more details), while δ(k k k) is the Dirac delta function. The differential equation we obtain for the polarization field is then: thus this equation has two poles of order one in ± (k 0 ) 2 − (k x ) 2 − (k y ) 2 , which are regular singular points. We can conclude that our equation is a Fuchsian differential equation. This Fuchsian structure of the field equations near the horizon is an important clue in favor of thermality, since it is a recurrent behaviour observed in different frameworks [16,26,31]. From now on, in order to simplify our treatment, we will use the 2D-reduction approximation, i.e. we will fix k x = k y = 0. This means that the two poles mentioned above reduce to ±k 0 . For the 4D analysis see section 3.3. Since we are only interested in the physical part of the fields, we can project from the left, e.g., on the (k k kindependent) transverse direction e e e 1 , given by eq. (3.11). DefiningP = e µ 1P µ we obtain whose solution isP where C is an integration constant and we have defined for simplicity For later convenience note that It is to say that in eq. (3.19) we have reabsorbed a term proportional to k 0 in the integration constant and that we have neglected a term of the form (1 − 4πg 2χ (0)γ 2 v 2 ) kz χ(0)|α| , on behalf of the fact that it would only amount to a very small shift in the saddle points. Now, in order to get the field solutions we are looking for, we have to re-transform the polarization field in the x-space: The contour Γ has to be homotopic to the real line and it has to be chosen in order for the mode solutions to decay inside the horizon, as these are the boundary conditions relevant for particle creation (see [26]). Moreover the contour has to be chosen in order for the integral to converge. Before approaching the computation of P (t, z), let us undertake the following change of variable: we obtain With these definitions eq. (3.22) can be written as where we have defined the branch points Note that u ± b > 0 ∀ k 0 > 0. η defined as above has to be considered as the "big parameter" to be used in the saddle point approximation: as usual in the Cauchy approximation. Before pursuing further calculations, we stress that in previous papers [16,18] a different approach was assumed, i.e. for the saddle point approximation the functions(z, u) := i uz − u 3 3 was taken into account in place of (3.27). As a consequence a quadratic equation was obtained and suitable integrals around the branch cuts were considered (see in particular [18]). In the following we shall compare our present approach with the aforementioned ones. The integrand possesses four saddle points, which are obtained by solving the quartic equation ∂ ∂u s(z, u) = 0. 
Since its exact solutions are too involved to be of any usefulness, we solve this equation by expanding it, as well as its solutions, in orders of η −1 : At zeroth order we get where u ± u (0) ± are the "standard" saddle points, whose higher order corrections are of limited interest, hence we can simply write As regards u ±s , at first order we get: Under the condition χ|α|z 1 this yields: We stress that these two saddle points are usually overlooked in the literature, yet we take the view that they cover a very important role in this analysis. As a consistency condition for our expansion, we require that the first order solutions above be much smaller than the zeroth order ones. This implies z 1 |α|χ Note that the peak emission frequency (see eq. (A.6)) is proportional to κ, hence if κ was large enough no linear region would be present. 3.2.1. On the choice of the contour, branch cuts and steepest descent paths. As mentioned before, the choice of the contour has to be made in order to fulfil some staple requirements. The requirement of the convergence of the integral is achieved by a contour running to infinity along any direction of the complex u-plane in which the integrand decays to zero. This is equivalent to require that the contour asymptotes to a region where Re[s(z, u)] < 0. Specifically note that at large u the function s(z, u) is dominated by the cubic term. We then have to require that in the allowed asymptotic regions Re[−iu 3 ] < 0 holds. This implies 8 that the contour must asymptote to any of the following three regions of the complex u-plane Convergence regions amount to valleys of the integrand. Another issue regards the choice of the two branch cuts, which arise from the complex natural logarithm, spreading from the two branch points u ± b . We adopt the simplest possible choice, which is to consider vertical cuts going upwards to +i∞. Later on, we will have to use the method of steepest descent (or saddle point method) to compute the contributions to the integral (3.26) coming from the saddle points. Steepest descent paths can be obtained by imposing where I 0 is a constant. Substituting u = a + ib into s(u, z), where a and b are obviously the real and imaginary part of u, as well as neglecting the sub-leading logarithmic terms for simplicity (which give contributions only near the branch points), we obtain In a more explicit form, In order to guarantee the reality of the above expression we have to find the regions where the left hand side function is non-negative (remember we are considering z > 0). For large |a| the function meets the oblique asymptotes ± |a| √ 3 , while for a → 0 + (a → 0 − ) we have a vertical asymptote as long as I 0 > 0 (I 0 < 0). 3.2.2. Mode functions inside the black hole (z < 0). A possible choice for the contour inside the horizon, which we shall call Γ in , is portrayed in fig. 2. In this case the value of the integral is dominated by the contribution of the saddle point u − = −i |z|, from which the contour passes. Using the saddle point approximation at the leading order (in the limit η → ∞ it becomes asymptotically exact, see e.g. [34]) we have Inserting the value of the saddle point we obtain < 0 ⇔ sin(3θ) < 0 ⇔ 1 3 (2πn − π) < θ < 2 3 πn, n ∈ Z. Figure 2. Schematic (not to scale) representation of the complex u-plane, in which are depicted the forbidden asymptotic regions (shaded regions), the branch cuts (red and zigzagged paths), the branch and saddle points, as well as the inside-horizon contour Γ in (blue curve). 
We can see that this solution decays exponentially inside the horizon, as required from the boundary conditions. Note that, had we chosen a contour passing through the saddle point u + , it would have led to a growing mode function inside the horizon. The saddle points u ±s , instead, would have led to oscillating modes. This facts justify the choice made for the inside-horizon contour of the integral. 3.2.3. Mode functions outside the black hole (z > 0). The outside-horizon case has a richer behaviour than the previous one. Indeed now the saddle points u ± are purely real and, since z appears as an external parameter in this framework, it is possible to observe, as z varies, different hierarchies for the saddle and branch points in the complex u-plane. First of all notice that, according to the linear region assumptions on the parameters, we always have 0 < u −s < u − b < u + b < u +s . This implies that the u ±s saddle points can be ignored in this discussion. The different possible hierarchies for the branch points and for the saddle points u ± , as z varies in the near horizon range, are then: Configuration (a) could be thought of as "standard", but a priori it's not clear if it should be considered as the relevant one. The issue of its preponderance w.r.t. to the other hierarchies is talked over in appendix A. From now on, if not explicitly stated, we shall only deal with the standard configuration (a). We shall now show that: • the leading-order contributions coming from the u ± saddle points, can be correctly identified with the WKB short-wavelength modes, as usual; Figure 3. Outside-horizon contour (blue) for the standard configuration, eq. (3.44). The notation is as in fig. 2. The dashed parts of the contour are taken at Re [ηs(u, z)] constant and asymptotically in the allowed regions, such that their contribution is negligible. • the leading-order contribution coming from the u −s saddle point, can be correctly identified with the counter-propagating mode; • the leading-order contribution coming from the u +s saddle point, can be correctly identified with the Hawking mode. To prove the first three statements let us adopt as a contour, now tagged Γ out , an homotopical modification of Γ in which passes through every saddle point, as depicted in fig. 3. In this case the relevant contributions to the integral, in the large-η limit, are P out (t, z) P − (t, z) + P −s (t, z) + P +s (t, z) + P + (t, z). (3.47) The leading-order contributions for the u ± saddle point read We clearly see that these contributions perfectly match, in the linear region, with the asymptotic WKB modes with short wavelength, as can be seen from the amplitude and phase factor of such modes, reported in table 1. As for the leading-order contribution for the u −s saddle point, we have which can be correctly interpreted as the counter-propagating contribution. As for the leading-order contributions due to the u +s saddle point we have which, in view of the perfect correspondence between the amplitude and the logarithmic part of the phase factor, can be identified with the Hawking mode. We take the view that the remaining mismatching regarding the linear term in z of the phase, is due to the extreme sensibility to accuracy in the calculations needed to properly manage the Hawking state. 
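Before turning to the branch-cut issue, it is worth making explicit the relation through which the temperature is extracted in §3.2.4 below. Since the displayed formulas of that subsection are missing from this copy, we record the standard Boltzmann-ratio form used in this kind of near-horizon analysis, as a hedged reconstruction and not as the paper's exact expression:

```latex
% Standard relation between the amplitudes of the near-horizon solutions matching the
% negative- and positive-norm asymptotic modes and the emission temperature
% (hedged reconstruction of the role played by eq. (3.52); units restored):
\[
  \left| \frac{P_{-}}{P_{+}} \right|^{2} \;=\; e^{-\hbar \omega / (k_{B} T_{H})},
\]
% where the assignment of P_- (P_+) to the negative- (positive-) norm partner follows the
% paper's conventions; the exponent extracted from the ratio of the saddle-point
% amplitudes in eq. (3.48) then determines T_H independently of the overall normalization.
```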
As regards the branch-cuts contributions, which appear in the case one adopts the quadratic equation for the saddle points as in [16,18], by adopting a path circumventing the branch-cuts 9 , a straightforward calculation in the limit as z → ∞ leads to Still, this asymptotic expansion is not compatible with the approximation in which the dielectric perturbation δn(z) is small, i.e. |δn(z)| 1, hence we take the view that these solutions shall not be identified with the counter-propagating and Hawking modes. A different view under different assumptions is found in [18], where for the refractive index one allows n → 1 as z → ∞. We limit ourselves to notice that, if we were to identify these solutions with the counter-propagating and Hawking modes, for the counter-propagating mode there would be no correspondence at all with the appropriate WKB mode, while for the Hawking-mode the only thing that matches would be the amplitude, whereas the phase factor would be utterly mismatching. We notice that, in our perturbative approach, the two new perturbative saddle points appear in such a way that there is no more need to consider branch cut contributions (as no path strictly circumventing the cuts is necessary, see fig. 3), and the matching with asymptotic states is straightforward. In other terms, also the short momentum states which were "nested" in the branch cut contributions in [16,18], appear explicitly in our calculations. 3.2.4. Thermality of pair creation. Let's now analyse the thermal properties of the three configurations reported in eqs. (3.44) to (3.46). As a consequence of the construction of the states in the near horizon region, the temperature of the Hawking emission can be deduced from the ratio between the near-horizon states which match with the asymptotic negative and positive norm states, respectively. In formulas (with restored units): (3.52) To do so, we focus our attention on the amplitude of P ± , eq. (3.48), in which all the information related with the different hierarchies is encoded. Let's start from configuration (a). We have [18]. In [16], in a zeroth order approximation, the two branch cuts coalesced into a single one. We underline that this is exactly the same result found in [10,16], as well as in [18], since the geometries considered in all these works are conformally identical. For the configurations (b) and (c) it is easy to show that no thermality is associated with the two of them. 3.3. Near-horizon 4D analysis. In the 4D case the transverse basis vectors are given by eq. (3.10). As can be easily seen these two transverse vectors aren't mutually orthogonal, implying that a projection of the Fuchsian equation (eq. (3.15)) on these transverse vectors would give rise to coupled equations. To prevent this fact to occur we look for a new transverse basis vector,ẽ e e 2 , orthogonal to both e e e 1 , k k k and v v v: e e e 2 := αe e e 1 + βe e e 2 such thatẽ µ 2 e 1µ = 0. According to this new basis vector we can now project eq. (3.15) over either e e e 1 orẽ e e 2 without mixing field components. In particular, projecting over e e e 1 yields exactly eq. (3.16), except that now the field component will depend on k k k and that the regular singular points will be ±k : The equation projected overẽ e e 2 will be different due to the contribution of the non-zero commutator betweenẽ e e 2 and ∂ kz : [ẽ e e 2 , ∂ kz ] = 1 kx , but since this represents a pure gauge term, it can be shown that the two equations are physically equivalent. 
Considering thus the projection over e e e 1 and defining as aboveP (k k k) = e µ 1P µ (k k k) we obtain above we have defined for simplicity Notice for further convenience that which is remarkably independent fromk. As above, we have to re-transform the field in the x-space. Note that since k x and k y are conserved quantities they are not to be integrated and shall be kept fixed. where the contour is as in section 3.2. We now follow the same procedure as for the 2D-reduced case, introducing the variables u and η as defined in section 3.2. We similarly get This integral has exactly the same structure as the integral in eq. (3.26), i.e. it possesses four saddle points and two branch points, being: The discussion regarding the matching between near and far horizon modes is as in the 2D case. The only thing we are interested in here is thermality. According to the saddle point method we get for the P ± contributions exactly eq. (3.48), with the obvious substitutions. Then the same procedure presented in section 3.2 applies to this case. It can be shown that notwithstanding the changes due to the mass term, the thermal result is precisely the same: yielding β = 16π 2 g 2 γ 2 v α , which returns for the temperature exactly eq. (3.56). Conclusions In this paper we presented the Hopfield-Kerr model, an upgrade of the covariant Hopfield model [22,24], aiming at the description of non-linear effects in dielectric media and, in particular, of the Kerr effect. Such description is achievable through the introduction of a fourth-order self-interacting polarization term in the Hopfield Lagrangian. We analysed both the features of the inhomogeneity described by the model and its thermal properties, grounding on a linearization of the equations of motion, in order to demonstrate analogue Hawking-like emission. Our main results are: the reckoning of an exact solitonic solution for the full model; the analytical proof that the Hopfield-Kerr model exhibits thermality, confirming the result for the temperature found in the simplified scalar model [16]; the discovery of the correct near-horizon solutions associated with the long-wavelength asymptotic modes (Hawking and counter-propagating). As regards the inference of thermality we adopted the standard procedure for this kind of analysis [26], which consists in a mixture of WKB technique and Fourier transform for finding approximate solutions to the equations of motion of the linearized model. Far and near horizon solutions has to be properly matched, in order to identify the physical mode solutions. The identification of short-wavelength modes, which are the ones enabling the computation of the temperature of the emission process, is a relatively easy task. Yet we can't say the same as regards the long-wavelength modes. We showed that, in the near-horizon treatment, these modes originate from two sub-leading saddle points, which are never been considered in the literature. We also underline that the system considered presents different possible configurations w.r.t. thermality, some of which doesn't appear to be thermal at all. At any rate the standard configuration, which is the one usually considered in the literature, seems to be the dominant one (see appendix A). As regards the model itself we showed that the chosen non-linear modification of the Lagrangian is equivalent, in the linearized theory, to a spacetime modification of the microscopic parameters ω 0 and χ, in such a way that χω 2 0 is left invariant (see eqs. (2.16) and (2.17)). 
We also found that the inhomogeneity described by the theory gives rise to a negative Kerr effect, corresponding to a refractive index decrease, in contrast with the phenomenology of standard dielectric media. Appendix A. On the relevance of the standard hierarchy for saddle and branch points In this appendix we present three heuristic arguments supporting the thesis of the prevalence of the standard configuration for the saddle and branch points displacement, eq. (3.44), w.r.t. to the other configurations, eqs. (3.45) and (3.46). Note that the linear region condition, eq. (3.37), automatically implies the standard configuration. Yet a priori, this is not mandatory, since it is just a condition which is implicit in our approximation scheme. A.1. Dimensionless steepest descent parameter. It is easy to see that the quantities introduced in eqs. (3.23) and (3.24) are not dimensionless 10 . A simple inspection reveals that We can then define dimensionless quantities as follows: where κ := |n (0)| is a natural length scale for the physics at hand. With this redefinition we obtain for the saddle and branch points (outside the horizon and without considering u ±s which, as mentioned in section 3.2.3, are irrelevant in this discussion):ū In order to understand their relative displacement we have to give an estimate for their values. As regards the saddle points, a reasonable upper bound for |κ|z is hence we can roughly say that |ū ± s | ∼ 10 −2 . (A.5) As regards the branch points, we mean to estimate them near the peak frequency of the emission spectrum, which we will call k H 0 . Hence we would need to estimate both k H 0 , η d and κ. It can though be shown that (see [10]) Then, the cancellation of k H 0 and κ in the expression for the branch points leaves us only η d to be gauged. To do so, notice that If we label with ω and k l respectively the frequency and wave number in the lab frame, we can say that where ω and k l have to satisfy the lab-frame dispersion relation According to the Cauchy approximation for the refractive index, we have where the correction δn(z) shall not be considered. This leads to In eq. (3.24) the denominator is adimentional, i.e. γ v c , where the c doesn't appear due to the adoption of natural units. The Cauchy approximation is reliable in the visible spectrum. As an example, let us consider λ l = 0.8µm (see [11]). For the other physical parameters we shall take: n 0 = 1.458, B = 0.00354µm 2 , v = 0.685c (we should have c/v ∼ n 0 ), γ = 1.37. With the above values, we get B · (k H 0 ) 2 ∼ 10 −7 , which yields This guarantees the condition |ū ± s | |ū ± b | to hold. This condition is associated with the standard diagram, where saddle points are external with respect to the position of the branch points. Still, this condition is neither mandatory nor privileged, at least it is not clear why it should dominate. A.2. A further length scale. Another possible way to approach the subtle problem of the choice of a length scale, is the one proposed in [35]: we identify the appropriate length scale by considering the integral in eq. (3.22), in particular by selecting the leading term in k z in eq. (3.19). This term is (A.14) Now we define the length (c.f. with eq. (7) of [35]) The ansatz is that this scale dominates the behavior of the emission process, i.e. we assume that, as in [35], the length scale d br is such that the physical system is not able to resolve distances shorter than the scale itself. 
This means that we can consider the following lower bound for z: As a consequence, we must assume for the saddle points the lower bound: In order to understand which configuration gives the leading contribution, we have to compare the aforementioned lower bound with the value of the branch points evaluated, as above, at k H 0 : To make this comparison let's notice that Hence, if β is not very near 1, and if k 0 ∈ (0, k H 0 ), the dominant contribution to the amplitude of pair-creation comes from the standard configuration, eq. (3.44), and thermality is recovered. A.3. Group horizon turning point. There is a further possible way to infer when the standard configuration is the one to dominate. In the presence of a group horizon, we have a turning point which can occur on the right of the horizon z = 0. Indeed, the equation to be satisfied is (see [16]) as well as, in particular, z GH (k 0 = 0) = 0 and z GH (k 0 > 0) > 0. Hence, apart for k 0 very near to zero, we obtain a turning point on the right of z = 0, and then we can expect that every z < z GH (k 0 > 0) eventually do not play any relevant role in the scattering process at k 0 > 0 fixed. In other terms, our guess is that the presence of the turning point enables to stay away from z = 0. As a consequence, although in the spontaneous process it is hard to justify a leading thermal contribution, in the stimulated process, with a suitable choice of the frequencies, and with a suitable enhancement of the stimulated contribution with respect to the spontaneous one, it should be still possible to recover a thermal spectrum, as well as thermality of the Hawking radiation. Still, it is remarkable that the mechanism contributing to the particle production is horizon-generated in all cases. Appendix B. Link with the physical quantities We want to link the physical quantities with the microscopic parameters of the model. From the phenomenological dispersion relation in the Cauchy approximation we can write: n(ω, z) = n 0 + Bω 2 + δn(z), (B.1) as well as n 2 (ω, z) n 2 0 + 2n 0 Bω 2 + 2n 0 δn(z). (B.2) The physical expression for the refractive index in the lab frame is shown in eq. (2.24). If we expand this expression in the small-perturbation approximation, i.e. δχ(z) ω 2 0 −ω 2 χω 2 0 , according to the notation of section 2.1, and in the Cauchy approximation, i.e. ω ω 0 , we obtain 11 : n 2 (ω, z) = 1 + 4πχω 2 0 g 2 ω 2 0 + δω 2 0 − ω 2 Appendix D. Coalescence of branch points as k 0 → 0 It is easy to see that in the limit k 0 → 0 we have coalescence of the branch points at u = 0. In line of principle, this coalescence would require a uniform asymptotic expansion, in order to reach an agreement between the limit for k 0 → 0 of the asymptotic approximation and the asymptotic approximation taken at k 0 = 0 (which should be a legitimate asymptotic expansion). In the following we show that no discontinuous behaviour occurs, i.e. that both taking the limit of the integrals and calculating the integrals at k 0 = 0, yield the same result. The main point is that a quite mild behaviour actually takes place: indeed, for k 0 = 0, no branch cut occurs in the equation for the polarization field. This implies that at k 0 = 0 no cut contribution arises and this is perfectly coherent with the fact that cut contributions vanish as k 0 → 0. 
It is worthwhile stressing that k 0 = 0 is not only an allowed parameter in the physics at hand, but it also corresponds to the main contribution to particle creation in the experimental situation, as verified by the group leaded by Faccio [9,11,12]. Let us start from the analysis of the original system, eq. (3.1), evaluated at k 0 = 0: As it is evident, there are no more branch cuts in the differential equation for the polarization. The solution of the e e e 1 -projected equation isP Following the procedure outlined above, we can compute the leading contributions to the Fourier transformed field P (t, z) outside the horizon, which are now only due to the u ± saddle points. Since no branch point is present we find |P − | 2 |P + | 2 = 1, (D.4) as expected. On the other hand, we recall that lim k0→0 x ± = 0, (D. 5) and from the foregoing analysis it is easily verified that lim k0→0 P cut± = 0. (D. 6) This confirms that there is no need for any sort of uniform asymptotic expansion.
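As a closing numerical aside to the estimate made in appendix A.1, the quoted order of magnitude B·(k_H0)² ∼ 10⁻⁷ can be reproduced with a few lines of arithmetic. The chain of relations used below, k_H0 ≈ γ(ω/v − k_l) with ω ≈ c k_l/n₀, is an assumption made only for this sketch; it is one simple way to connect the lab-frame wavelength to a comoving wavenumber and recover the right order of magnitude.

```python
import numpy as np

# Parameters quoted in appendix A.1 (lengths in micrometres, c = 1)
lam_l = 0.8          # lab-frame wavelength [um]
n0    = 1.458
B     = 0.00354      # Cauchy coefficient [um^2]
beta  = 0.685        # v / c
gamma = 1.37

k_l   = 2.0 * np.pi / lam_l      # lab-frame wavenumber [1/um]
omega = k_l / n0                 # lab-frame frequency, assuming omega ~ c k_l / n0

# Assumed boost relation for the comoving peak wavenumber (sketch-only assumption)
k_H0 = gamma * (omega / beta - k_l)

print(f"k_l        = {k_l:.3f} um^-1")
print(f"k_H0       = {k_H0:.4f} um^-1")
print(f"B * k_H0^2 = {B * k_H0**2:.1e}   # text quotes ~1e-7")
```

With the parameters of appendix A.1 this gives a few times 10⁻⁷, consistent with the ∼10⁻⁷ quoted there and hence with the hierarchy |ū±s| ≪ |ū±b| of the standard configuration.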
Dupilumab-associated ocular surface disease or atopic keratoconjunctivitis not improved by dupilumab? Upadacitinib may clarify the dilemma: A case report

Abstract
Dupilumab-associated ocular surface disease (DAOSD) is a common clinical finding that appears in patients with atopic dermatitis (AD) within a few months of starting dupilumab, developing in about 25% of patients. Atopic keratoconjunctivitis (AKC) is a well-identified clinical entity, defined as a chronic inflammatory disease of the eye that affects 25%–40% of patients with AD. Most clinical signs of ocular involvement in AD patients treated with dupilumab overlap with the symptoms and signs of AKC. We propose that DAOSD and AKC represent the same disease, named differently by dermatologists and ophthalmologists. AKC-like disease may develop during dupilumab therapy as a consequence of alternative cytokine pathway activation (e.g. IL-33) secondary to the IL-4/13 pathway block. The novel drug upadacitinib may bypass the interleukin pathway through selective Janus kinase inhibition, avoiding positive or negative interleukin feedback at the ocular surface. In this case report, molecular analysis of conjunctival samples showed lower ocular surface inflammation (lower expression of HLA-DR), despite higher levels of IL-4 and IL-13, in a patient with AD and AKC during upadacitinib therapy compared with the prior dupilumab treatment. Targeted therapies in patients suffering from AD may prevent ocular and dermatological comorbidities, improving quality of life before quality of skin and vision.

Ocular involvement in AD patients treated with dupilumab includes conjunctival hyperaemia, papillary reaction, and superficial punctate keratitis [3,4]. A 2020 meta-analysis of 3303 patients demonstrated the efficacy and safety of dupilumab in controlling AD; however, conjunctivitis developed in 26.1% of patients [1]. Such secondary ocular surface inflammation has been defined as DAOSD, and its pathogenesis is still poorly understood. Independent risk factors such as baseline AD severity, prior history of conjunctivitis, and local biomarkers (TARC, IgE, and eosinophils) increase the risk of conjunctival involvement [3,5]. Several pathogenic hypotheses for the development of ocular surface disease have been proposed. (1) By blocking interleukin 13, dupilumab may cause goblet cell hypoplasia, resulting in decreased mucin secretion, mucosal epithelial barrier dysfunction, and qualitative tear production failure [6]. (2) Alternatively, upregulation of the Th1 response may result from the effect of dupilumab on Th2 signalling [7]. (3) A lower bioavailability of dupilumab at the conjunctiva, due to decreased diffusion of the drug and increased elimination, results in a shorter duration of its effect in that part of the eye. (4) Finally, unmasking of pre-existent subclinical atopic or allergic inflammatory processes, local immunodeficiency with resulting local infections, increased expression of costimulatory proinflammatory molecules (i.e., OX40L) based on alterations in the immunological milieu, eosinophilia, reduced IL-13-related mucus production, and disruption of the immune-mediated response of conjunctiva-associated lymphoid tissue may be implicated [4]. Atopic keratoconjunctivitis clinical signs include papillary reaction, conjunctival hyperaemia, mucous filaments, meibomian gland dysfunction, mucocutaneous junction involvement up to trichiasis, punctate superficial keratitis, subepithelial conjunctival fibrosis, symblepharon, corneal neovascularisation and keratinisation.
AKC-like disease appeared in the first weeks to months of dupilumab treatment and was mild to moderate [7]. Based on these observations, it is possible that patients with AD who have pre-existing ocular disorders have a lower threshold for the development of severe ocular involvement, as an exacerbation of pre-existing milder conditions [3]. Upadacitinib is a novel selective inhibitor of Janus kinase 1 approved for AD patients. A recent randomised, blinded, multicentre comparator clinical trial of 692 patients with moderate-to-severe AD demonstrated the superiority of upadacitinib, with more rapid skin clearance and itch relief and tolerable safety compared with dupilumab, and with fewer outbreaks of conjunctivitis (1.4% in the upadacitinib group vs. 8.4% in the dupilumab group) [8]. Recent evidence has shown improvement of dupilumab-associated conjunctivitis after switching to upadacitinib [9,10]. We aim to characterise clinical and biological ocular inflammation in dupilumab-induced AKC and the restoration of the previously quiescent ocular surface after switching to upadacitinib therapy.

CASE REPORT
An ophthalmological assessment was performed for a 54-year-old female with severe AD from birth, previously treated with topical and systemic steroids, oral cyclosporine, antihistamines, and dupilumab (from 2018 to 2023) with little benefit on AD except for the head-neck region and eye involvement. The assessment was performed by a team of specialist ophthalmologists. Experimental procedures were performed according to guidelines established by the ARVO, adhered to the tenets of the Declaration of Helsinki, and were approved by the Intramural Ethical Committee. The patient provided written informed consent to proceed with clinical and biomolecular analysis. At W0 the patient was under dupilumab treatment. She stopped dupilumab and, after a 12-week washout period, started upadacitinib 15 mg. After 8 weeks the main benefits involved Body Surface Area (from 20% to 0%), Eczema Area and Severity Index (from 16 to 0), Validated Investigator Global Assessment-atopic dermatitis (from 3 to 0), Dermatology Life Quality Index (from 12 to 0) and itch-NRS (from 8 to 0) (Figures 1 and 2). At W12 the ocular surface had improved, with no signs of superficial punctate keratopathy or mucous filaments, no conjunctival hyperaemia, and a reduction of the papillary reaction (Figure 3). Molecular analysis of conjunctival samples by quantitative RT-PCR (Table S1) shows a reduction of HLA-DR expression and an increase of IL-4 and IL-13 expression from W0 to W12 (Figure 4). Four sex- and age-matched healthy controls were used as a frame of reference.

DISCUSSION
Most clinical signs of ocular involvement in AD patients treated with dupilumab overlap with AKC. In our case report, ocular surface disease developed shortly after dupilumab treatment started. Despite several topical therapies, ocular symptoms and signs did not improve with time. We decided to shift systemic therapy from dupilumab to upadacitinib because of the patient's complaints and intolerance of the ocular upset. Both dermatological and ophthalmological signs improved within a few weeks of starting the new drug, with rapid recovery of the patient's symptoms.
The increase in IL-4 and IL-13 and the reduction in HLA-DR expression shown by the molecular results on conjunctival samples reflect the post-transductional action of dupilumab versus the pre-inflammatory cascade inhibition of upadacitinib. An alternative inflammatory pathway may be involved in the AKC outbreak when Th2 signalling is blocked by dupilumab. Upadacitinib reduces ocular inflammation, as demonstrated by HLA-DR expression, despite a reactive increase in IL-4 and IL-13, through its selective pre-transcriptional inhibitory action on Janus kinases. AKC-like disease may develop during dupilumab therapy as a consequence of alternative cytokine pathway activation (e.g. IL-33) secondary to the IL-4/13 pathway block. IL-33 plays important roles in atopic conditions, as recently proposed by Chiricozzi et al. [11]. Atopic keratoconjunctivitis is a severe allergic condition characterised by inflammation affecting the entire ocular surface. Atopic keratoconjunctivitis may cause blindness through corneal neovascularisation and opacities, as well as destruction of corneal epithelial stem cells and cicatricial sequelae [12]. However, despite strict therapy, such patients experience a critical reduction in their quality of life. Ocular discomfort, itching, and visual impairment limit their daily activities (driving, working, meeting friends) as well as their personal, social, and psychological development in such a young patient group. Atopic keratoconjunctivitis is more common in AD patients with head-neck involvement [13]. Discovering risk factors for AKC development during dupilumab treatment can help clinicians in daily practice [14]. Targeted therapies in patients suffering from AD may prevent ocular and dermatological comorbidities, improving quality of life before quality of skin and vision. By blocking the pre-transcriptional pathway, upadacitinib may avoid any positive or negative feedback at the ocular surface, as well as the alternative pathways that may build up under selective dupilumab IL-13/IL-4 antagonism. However, a definite conclusion needs a wider framework. The main limitation of any case report is the sample size. A future aim is to understand the molecular basis of the pathogenesis of secondary AKC in a narrow group of patients.

FIGURE 1: Lids involvement in atopic dermatitis (AD). Cutaneous disease of the head-neck region of an AD patient during dupilumab treatment (left) and rapid, complete clinical resolution after 8 weeks of upadacitinib treatment (right).
FIGURE 2: Head-neck involvement and residual disease on the upper limbs during dupilumab treatment (a, c). Disappearance of clinical signs after 8 weeks of upadacitinib treatment (b, d).
FIGURE 3: Ocular involvement in atopic dermatitis (AD). Ocular signs of atopic keratoconjunctivitis (AKC) in a patient previously treated with dupilumab (left): conjunctival hyperaemia, mucous filaments, increased tear meniscus, tarsal papillary reaction. Right: improvement of hyperaemia and papillary reaction after 12 weeks of treatment with upadacitinib.
A Distributed Demand Side Energy Management Algorithm for Smart Grid This paper proposes a model predictive control (MPC) framework-based distributed demand side energy management method (denoted as DMPC) for users and utilities in a smart grid. The users are equipped with renewable energy resources (RESs), energy storage system (ESSs) and different types of smart loads. With the proposed method, each user finds an optimal operation routine in response to the varying electricity prices according to his/her own preference individually, for example, the power reduction of flexible loads, the start time of shift-able loads, the operation power of schedulable loads, and the charge/discharge routine of the ESSs. Moreover, in the method a penalty term is used to avoid large fluctuation of the user’s operation routines in two consecutive iteration steps. In addition, unlike traditional energy management methods which neglect the forecast errors, the proposed DMPC method can adapt the operation routine to newly updated data. The DMPC is compared with a frequently used method, namely, a day-ahead programming-based method (denoted as DDA). Simulation results demonstrate the efficiency and flexibility of the DMPC over the DDA method. Introduction In recent years, the smart grid has undergone significant innovation and moved from conceptual to operational.It has been repeatedly demonstrated to provide more reliable, environment-friendly and economically efficient power systems [1]. With the development of advanced information and communication technologies and smart metering infrastructures, the two-way digital communication system now enables the utility company to share information, e.g., the time-dependent electricity price, with users.Moreover, the users can adopt related adjustment strategies to minimize their operation cost, and further to improve the overall energy system performance [2].This is referred to as demand-side management (DSM) [3,4], which has attracted more and more attention. Usually, candidate control actions that a user can perform include, for example, delaying the start time of shift-able loads, decreasing the demand of power flexible loads, modifying the power schedule of schedulable loads, ESS charging/discharging, selling extra electricity (produced by distributed wind and/or PV generators) to utility companies.By the DSM, a "win-win" situation between users and utility companies is expected [5]. 1. A novel DSM is proposed that considers minimizing the operation cost of each user as well as the discomfort caused by the change of load operation schedule.The DSM can also accommodate different types of user-preferences. 2. A distributed optimization algorithm based on games theory is proposed to coordinate the users' operation schedules to minimize their own operation costs.Meanwhile, an iteration mechanism is proposed to accelerate the convergence speed. 3. An MPC framework is implemented to integrate the user operation management model and distributed optimization algorithm.The MPC framework, featuring a rolling up and feedback mechanism, is shown to be able to handle the negative impacts caused by the forecast uncertainty of the RESs output and load demand. 
The rest of this paper is organized as follows. Section 2 reviews a selection of representative works that are related. Section 3 presents the model formulation of the smart grid, including the system overall model, the models of components such as the smart loads, the ESS, the pricing policy, and the operation optimization objectives for each user. Section 4 introduces the MPC-based distributed control scheme to coordinate the operations of all users. Numerical simulations are presented in Section 5. Section 6 concludes the paper.

Literature Review
There have been several studies investigating the DSM for smart grid. These methods have their own merits and disadvantages. Next, we review a selection of representative ones.

Since large commercial or industrial centers usually have large energy consumption, Aalami, et al. [12] discussed the DSM for large users. Setlhaolo, et al. [13] further studied the optimal scheduling scheme for typical home appliances. Zhang et al. [14] investigated the energy management strategy of a smart home with the consideration of the demand response (DR). Erdinc [15] evaluated the economic impacts of the ESS, distributed generators (DG) and shift-able loads under different DR strategies. These works consider DSM for either large aggregators or a single user only. Also, they do not account for discomfort costs together with different user preferences, which has greatly limited the application of the existing methods.

System Model and Problem Formulation
This study considers a smart grid consisting of a utility company and multiple (M) energy users as shown in Figure 1. For brevity, the utility company is abbreviated as utility in the following sections. A DSM control center is deployed in the utility and an EMS is deployed on each user's side. The EMS has the following functions: (i) communicating with the smart meters on the user side and the DSM control center in the utility; (ii) predicting the power generation of RESs and the load demand; (iii) optimizing the operation routines of the dispatchable units of the user based on forecasts, the electricity price and so on. The control orders for dispatchable units are implemented through the programmable logic controller (PLC) in the system. Each user can not only purchase electric energy from the utility but also sell extra power generation back to the utility to make revenue. Moreover, for users with an ESS unit, both the selling time and the amount of selling power can be optimized.
Prior to describing the system model, it is assumed that the whole control and prediction horizon is divided into T time intervals; the duration of each time interval is ∆t.

Model of Loads
According to their features, the loads are classified as base loads, power flexible loads, shift-able loads and schedulable loads [26,27]. Therefore, the total load demand in period t is computed as in Equation (1). Also, the forecast error of the load demand is assumed to follow a Gaussian distribution [28].

For base loads, the operation time and power demand must be satisfied and cannot be changed. The power demand cannot exceed the defined capacity limit.

For flexible loads, the power demand can be curtailed by users to save money or for use in emergency conditions. However, the curtailed power cannot exceed a certain range, to avoid the dissatisfaction caused by the adjustment actions. In the model, F_i power flexible appliances are considered for user i, indexed by f.

For shift-able loads, the start time can be delayed within a certain time window, but the operation power is fixed and cannot be adjusted. Also, once the task is started, it cannot be stopped before it is completed. In the model, A_i shift-able appliances are considered, indexed by a.

For schedulable loads, both the start time and the operation power can be adjusted, provided the total demand is satisfied before the deadline. However, once the load is started it should be operated until the task is completed. In this model, B_i schedulable appliances are considered, indexed by b.

Next, we model the discomfort cost. Specifically, the discomfort cost C_i,f(t) caused by the power reduction of power flexible load f is modelled as a convex non-decreasing function, as shown in Equation (10).

The discomfort cost C_i,a(t) caused by delaying the start time of shift-able load a is computed as in Equation (11). It indicates that the longer the start time is delayed, the larger the discomfort penalty cost that must be paid, where δ^base_i,a(t) is the user's baseline on/off status of appliance a in period t.

The discomfort cost C_i,b(t) caused by schedulable appliance b in period t comes from two parts: the power adjustment cost and the operation time adjustment cost, where δ^base_i,b(t) and l^base_i,b(t) denote the user's baseline on/off status and power demand of appliance b in period t, respectively.
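The three discomfort terms just described can be illustrated with simple stand-in functions. The exact expressions of Equations (10)-(12) are not reproduced here, so the quadratic and linear forms below, as well as the coefficient values, are hypothetical; the sketch only shows how a user-specific preference coefficient maps a schedule change into a discomfort cost.

```python
import numpy as np

def curtailment_cost(alpha_f, delta_p_kw):
    """Discomfort of curtailing a flexible load by delta_p_kw.
    A convex, non-decreasing quadratic is assumed as a stand-in for Eq. (10)."""
    return alpha_f * delta_p_kw ** 2

def shift_delay_cost(alpha_a, delay_hours):
    """Discomfort of delaying a shift-able appliance's start time (stand-in for Eq. (11)):
    the longer the delay with respect to the baseline start, the larger the penalty."""
    return alpha_a * delay_hours

def schedulable_adjust_cost(alpha_b, p_kw, p_base_kw):
    """Discomfort of re-profiling a schedulable load away from its baseline power
    schedule (stand-in for the power-adjustment part of Eq. (12)); a second term
    for shifting the operation time window could be added in the same spirit."""
    d = np.asarray(p_kw, float) - np.asarray(p_base_kw, float)
    return alpha_b * float(np.sum(d ** 2))

# Example: curtail 3 kW of flexible load, delay a washer by 2 h,
# and flatten a schedulable load from [4, 0] kW to [2, 2] kW.
print(curtailment_cost(2.5, 3.0))
print(shift_delay_cost(0.04, 2.0))
print(schedulable_adjust_cost(0.02, [2.0, 2.0], [4.0, 0.0]))
```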
Despite the advantages of RESs, the randomness and fluctuation of the RESs power generation make their applications in smart grid difficult.To guarantee safe use of RESs, their power outputs are constrained within a certain range.0 ≤ P i,PV (t) 0 ≤ P i,wind (t) Similarly, the forecast errors of wind and PV generation is described by Gaussian distributions [29,30]. As the most flexible unit in smart grid, the ESS model mainly consists of the system dynamic model, energy capacity model, power charge/discharge model and operation status model. The initial energy level is denoted as . To effectively handle emergency conditions, the energy level of the ESS is reset to near the half energy capacity at the beginning of each day.Moreover, the operation and maintenance cost of ESS [31] is calculated as follows. Energy Price Model The base price p u (t) comes from the cost by thermal generators (we consider RESs all are deployed in the user side, and they are not the main power source) or a simple artificial cost tariff which is used by the utility to impose a proper DSM program. As the price function is non-decreasing convex, we set the time-dependent base price function to be proportional to the first order derivative of the time-dependent generation cost [32]. where P u (t) is the power generated by the utility, which equals to the summary of the load demands for all users. To promote the use of DERs as well as reduce the negative impacts of randomness of the RESs outputs, the selling electricity price p i,s (t) for users is set lower than the base price [31].Meanwhile, according to the rate-of-return regulations [33], the buying electricity price p i,b (t) for users is set higher than the base price. where The real-time electricity price mechanism defined in Equations ( 25) and ( 26) guarantees the user's benefits, that is, when the base price is high the selling price is also high.Meanwhile, the difference between the buying price and the selling price promote the users to consume the power generated by themselves, and to encourage the utility to buy energy from the users. Furthermore, to reduce the negative impacts introduced by the intermittent and random RESs outputs to the power system, users are not allowed to sell energy back to the utility if the forecasted RESs generation cannot supply the energy demand of the user completely. Without loss of the generality, we set the operation cost C u (P u (t)) as, where c 1 , c 2 are constants. Power Interaction Model Because of Equations ( 25) and ( 26), the buying/selling energy cost model for each user is not a continuous model, becoming a mixed logical dynamic (MLD) model [14]. The power interaction between users and utility is modelled as follows. 0 ≤ P i,gI (t) ≤ P max i,gI δ i,gI (t) (28) Equations ( 28)-(39) indicate that the buying and selling power of each user cannot exceed its capacity.Equation (30) indicates that a user cannot purchase and sell power at the same time. The power flow in each user is shown as follows. Cost Model The overall cost Ψ i (t) as shown in Equation (33) consists of energy billing cost and the discomfort cost.The billing cost includes the energy purchasing cost, the energy selling revenue and the ESS operation cost.The discomfort cost includes the penalty cost of curtailing flexible loads, the delaying penalty cost of shift-able loads start time, and the adjustment penalty cost of schedulable loads. 
Subject to (1)- (32) The cost Ψ u for the utility includes the fuel consumption cost and the buyback cost from the users, which is computed as follows. According to Equations ( 33) and ( 34), the cost optimization model for users is a mixed integer programming (MIP) problem, it is non-convex.The scheme of each user is affected by other users since the electricity price is determined by the total load demand of the smart grid.To solve the centralized optimization problem, all users' specifications and preferences are required, which is often difficult to obtain in practice, considering the privacy protection and computation burden.Meanwhile, each user hopes to minimize his/her own operation cost.This is difficult to achieve in a centralized way.Alternatively, a reasonable way is to allow each user to optimize his/her own operation schedule in a distributed way MPC Based Distributed User Energy Management Strategy This section first presents the distributed optimization algorithm used to solve the cost optimization model and verifies the equilibrium of the proposed distributed algorithm.Second, the detailed process of the MPC-based distributed optimization method is described. Distributed User Energy Scheduling Optimization Game theory is adopted to capture the competition among users.All users are considered to be rational entities that are only interested in minimizing their own cost.Specifically, the game theory-based method is described as follows. (i) Players: all users (M users) in the smart grid.(ii) Strategies: each user i ∈ M selects its strategy by scheduling the dispatchable units (smart loads and ESS) to minimize his/her own cost.(iii) Payoffs: the payoff P(y i , y − i ) for user i comprises two parts, see Equation ( 35): the actual operation cost Ψ k i described in Equation ( 33) at iteration k, and the penalty cost Φ k i caused by large fluctuant of the operation routine in two successive iterations. Please note that we use X k i to denote X i in the k th iteration, where X can represent all the decision variables and objective functions in this paper. Different from [21], the power varying of dispatchable units in two successive iterations is considered in this study.This is to avoid the situation that little difference for the power interaction but a large difference for the payoff in two successive iterations.Furthermore, Equation ( 36) can effectively cope with the power varying introduced by ESS.The detail distributed optimization process is shown in Table 1. Energies 2019, 12, 426 9 of 19 When the changes of the user's payoff and power routine within the preset threshold in two successive iterations, the distributed optimization will be stopped and the equilibrium is achieved. According to the distributed algorithm in Table 1, the user only needs to send its total power schedule to the utility control center and receive the electricity price over the control horizon.No communication is required among different users.Hence, the privacy of each user is protected effectively. Table 1.Algorithm for distributed optimization algorithm for the energy scheduling of users. 
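A minimal sketch of the payoff just described is given below. The squared-deviation form of the fluctuation penalty is an assumption made for the sketch (the text only requires the penalty to grow with the change of the routine between two consecutive iterations), and the operation cost Ψ would in practice come from the user's own MIP solve.

```python
import numpy as np

def fluctuation_penalty(lam, schedule_new, schedule_prev):
    """Penalty Phi_i^k discouraging large swings of the user's power routine
    between two consecutive best-response iterations (a squared deviation is
    assumed here as a stand-in for Eq. (36))."""
    d = np.asarray(schedule_new, float) - np.asarray(schedule_prev, float)
    return lam * float(np.sum(d ** 2))

def payoff(operation_cost, lam, schedule_new, schedule_prev):
    """Eq. (35): actual operation cost Psi_i^k plus the fluctuation penalty Phi_i^k."""
    return operation_cost + fluctuation_penalty(lam, schedule_new, schedule_prev)

prev = [10.0, 12.0, 8.0]
print(payoff(100.0, 0.5, [10.5, 12.0, 8.0], prev))   # small change -> 100.125
print(payoff(100.0, 0.5, [14.0,  6.0, 12.0], prev))  # large change -> 134.0
```

Penalising the swing between iterations, rather than the schedule itself, damps the oscillations that a pure price-response game can produce, which is what the iteration mechanism is meant to achieve.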
Begin Initialize the iteration counter k = 0; Initialize P 0 i,gI (t), P 0 i,gO (t) according to the base load scheme and forecasts; Send P 0 i,gI (t), P 0 i,gO (t) to the utility control center; Repeat update the received buying and selling electricity price p i,b (t), p i,s (t) from the utility control center; minimize the payoff shown in Equation ( 35) and calculate the newly operation schedule for all dispatchable units.send newly buying/selling power schedule P k+1 i,gI (t), P k+1 i,gO (t) to the utility control center; where ∇C opt , ∇L sh f , ∇L sch , ∇P ESS , ∇P grid are the total cost difference, shift-able load power schedule difference, schedulable load power schedule difference, ESS power schedule difference and the buying/selling power schedule difference in two consecutive iterations, respectively. In the distributed optimization algorithm, λ k i is a user-defined parameter which affects the algorithm convergence.In this paper, λ k i is defined based on the following considerations.First, the change of operation routines becomes small as the algorithm progresses.Second, the greater the change, the higher the penalty cost.Third, a small penalty coefficient is used for large power consuming users while a large penalty coefficient is used for small power consuming users.This is because high consuming users tend to care more about the electricity price.Correspondingly, the change of their operations impacts more on the electricity price.Specifically, λ k i is computed as follows. The parameter π is an auxiliary coefficient to adjust the convergence speed of the distributed optimization algorithm, and is set to 0.5. Nash equilibrium refers to the best solution to a non-cooperative game, that is, no player can gain more credits by changing his/her own strategy only [34].In this paper, NE consists of a set of strategies y The game theory-based distributed user energy management strategy proposed in this paper satisfies the second fundamental welfare theorem [35] of the Walrasian equilibrium theory [36], this is because the base electricity price mechanism and game theory framework (p u , {P i } M i=1 ) meet the form of the Walrasian equilibrium theory.p u , P i are the base electricity price and power schedule vector, respectively, and are defined as follows. MPC-Based Control Framework In recent years, the MPC framework has attracted many attention in power system energy management [26,31] due to its feedback and rolling horizon mechanism being suitable to handle varying information and stochastic parameters. In this section, we formulate an MPC-based user energy management problem, whose solution yields a trajectory of control actions and states into the future that satisfies the dynamics and constraints of the user and smart grid operation while optimizing some given criteria.Only the first sample of the control actions is implemented, and then the horizon is shifted. 
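The receding-horizon logic just described, wrapped around the best-response iteration of Table 1, can be sketched as follows. Here best_response, price_fn, forecast, apply_first and update_state are placeholders for the user-side optimisation of Equation (35), the utility's pricing rule, the forecasting models and the metering feedback; only the loop structure (iterate to equilibrium, apply the first control sample, shift the horizon) is taken from the text, and the fixed-length schedule arrays are a simplification of the shrinking horizon.

```python
import numpy as np

def coordinate(users, schedules, price_fn, best_response,
               max_iter=50, eps_cost=1e-3, eps_power=1e-2):
    """Distributed best-response iteration of Table 1.

    Only aggregate buy/sell profiles are exchanged with the utility control
    center, so each user's appliance data stay private. best_response(user,
    price, own_profile) stands in for the user's local MIP solve and returns
    (new_profile, new_payoff).
    """
    payoffs = [np.inf] * len(users)
    for _ in range(max_iter):
        price = price_fn(np.sum(schedules, axis=0))            # utility posts prices
        results = [best_response(u, price, s) for u, s in zip(users, schedules)]
        new_schedules = [r[0] for r in results]
        new_payoffs = [r[1] for r in results]
        d_cost = max(abs(a - b) for a, b in zip(new_payoffs, payoffs))
        d_power = max(float(np.max(np.abs(a - b)))
                      for a, b in zip(new_schedules, schedules))
        schedules, payoffs = new_schedules, new_payoffs
        if d_cost < eps_cost and d_power < eps_power:          # equilibrium reached
            break
    return schedules

def dmpc(T, users, forecast, price_fn, best_response, apply_first, update_state):
    """Receding-horizon (MPC) wrapper: re-plan every hour, dispatch only hour tau."""
    schedules = [np.zeros(T) for _ in users]
    for tau in range(T):
        forecast(users, tau, T)                 # refresh RES/load forecasts with new data
        schedules = coordinate(users, schedules, price_fn, best_response)
        apply_first(users, schedules, tau)      # implement the first control sample only
        update_state(users, tau)                # measured feedback, then shift the horizon
    return schedules
```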
The detailed coordination procedure for MPC-based distributed optimization is as follow: (i) At the end of period τ, the EMS of user i obtains the updated state of its related dispatchable units, including the energy level of ESS, E i,ESS (τ), the operation status of shift-able loads, δ i,a (τ), the operation status, δ i,b (τ) and power demand, l i,b (τ) of the schedulable loads.Then the EMS calculates the forecasted data of load demand, PV generation and wind production from period ).The first sample of the control sequence is then sent to local controllers.(iii) At the beginning of τ + 1, only the first sample of the control sequence is implemented.The insufficient power caused by forecast errors is compensated by the utility.On the contrary, the excess power will be sold back to the utility with a lower price.Finally, the EMS updates the parameters and forecast model with new data.(iv) Go to step (i) until the end of the simulation. The designed control variables for the MPC controller is P i .The forecast error is FE i which contains (2 + F i ) × T variables. It is expected that, by the MPC-based distributed optimization, an optimal plan can be obtained to potentially compensate the forecast errors. Simulation and Results In this section, we first compare our proposed MPC-based distributed DSM (DMPC) strategy with the traditional day-ahead programming-based distributed DSM (DDA) strategy [32].Then, the impact of the penalty cost term of the shift-able and schedulable loads of the system operation routine is discussed.Finally, we compare our parallel distributed optimization framework with the sequential distributed optimization framework [16]. Experiment Setup We consider a smart grid with four users.Each user has wind and PV generators, ESS unit and different kinds of smart loads.Related parameters like the capacity of PV and wind generators, the maximum base load demand and the maximum power interaction for each user is listed in Table 2.The control and prediction horizon is T = 24 h, the duration of each time interval is 1 h. The historical data of base load, wind and PV power generation for each user is collected and modified from Belgium's transmission system [37] as shown in Figure 2. The basic operation schedule is as follows.The shift-able loads and schedulable loads work as they are planed, no power is curtailed and no ESS is integrated.Since no specialized data set for power flexible loads, we only consider a total load demand for power flexible appliances, that is, F = 1. Also, we assume that the demand of flexible load is 30% of the base load in each period.The maximum curtailment ratios for the four users are preset as [0.5, 0.4, 0.35, 0.3], meanwhile, the discomfort penalty coefficients for curtailing the flexible loads are set as [2.5, 1.8, 2.2, 2]. There are 8 shift-able appliances for each user, the power demand, available operation time window, time needed to complete the task and the discomfort penalty coefficient are all listed in Table 3. Different discomfort penalty coefficients are set for different users since the user preferences are usually different.Similarly, the parameters for schedulable loads are shown in Table 4. Since the schedulable loads can adjust both the start time and operation power, two penalty cost coefficients are adopted. 
The parameters of the ESS are listed in Table 5. The operation and maintenance cost is 0.1 $ for each unit kWh. The charging/discharging efficiency of each ESS is 0.95. The initial energy level of each ESS is half of its maximum energy level.

According to the rate-of-return regulations, we set the retail purchasing electricity price for each user at 1.2 times the base price, and the retail selling price for each user at 0.8 times the base price. Furthermore, we set the purchasing price for extra power in the real-time power compensation stage at 3 times the base price, and the selling price for extra power generation at 0.5 times the base price, because the real-time adjustment of the utility costs more than a pre-scheduled plan. The cost parameters for the utility are c1 = 0.000066 $/kWh² and c2 = 0.18 $/kWh. The capacity of the utility is 5 MW, and the minimum power output is 50 kW. If the total power demand of all users is less than 50 kW, the utility only acts as a servicer; in other words, the energy coordination should be performed among the users themselves. The threshold parameters for the stopping criteria of the distributed optimization algorithm in Table 1 are likewise fixed for the case study.
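With the tariff and utility-cost settings above, the price signals seen by the users can be written down directly. A quadratic generation cost C_u(P) = c1·P² + c2·P is assumed in the sketch below (the explicit expression is not reproduced in the text), and the proportionality constant between the base price and the marginal cost dC_u/dP is taken as one.

```python
C1, C2 = 0.000066, 0.18          # $/kWh^2 and $/kWh, from the case study

def base_price(p_total_kw):
    """Base price taken proportional to the marginal generation cost dC_u/dP of an
    assumed quadratic cost C_u(P) = C1*P^2 + C2*P (proportionality constant = 1)."""
    return 2.0 * C1 * p_total_kw + C2

def tariffs(p_total_kw):
    """Scheduling-stage retail tariffs (1.2x / 0.8x of the base price) and the
    real-time compensation tariffs (3x / 0.5x) applied when forecasts miss."""
    p0 = base_price(p_total_kw)
    return {"buy": 1.2 * p0, "sell": 0.8 * p0,
            "rt_buy": 3.0 * p0, "rt_sell": 0.5 * p0}

print(base_price(2000.0))   # ~0.444 $/kWh at 2 MW aggregate demand
print(tariffs(2000.0))
```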
Simulation Results
All simulations are run on a laptop with an Intel(R) Core(TM) i5-3210M CPU @ 2.5 GHz and 8.00 GB of memory. The ILOG CPLEX v.12 optimization solver is utilized for solving the MIP models; MATLAB 2013a and the YALMIP toolbox [38] are used to link the CPLEX solver and build the optimization models.

Results of the DMPC Strategy and DDA Strategy
First, the DDA strategy is introduced briefly. It is a traditional two-stage, open-loop distributed energy management algorithm [15,32], consisting of a scheduling stage and a real-time power compensation stage. The detailed process is as follows.

(i) At the scheduling stage, the EMS determines the operation schedule of the smart load appliances and the ESS over the control horizon by implementing the distributed optimization algorithm of Table 1 at the beginning of the day, using the forecasts of RESs generation and load demand. The control sequence sent to the controllers of all dispatchable units is then implemented strictly.

(ii) At the real-time power compensation stage, for each user the insufficient power is provided by the utility company at a higher price, and the extra power generation is sold back to the utility at a lower price.

The operations of the DMPC and DDA at the real-time power compensation stage are the same, aiming to reduce the negative impacts introduced by forecast errors and the randomness of the RESs output.

The routines of power generation for the utility under the DMPC and DDA are shown in Figure 3. Please note that the basic power refers to the utility generation where no optimization is implemented and no ESS unit is used. The scheduling power refers to the utility generation based on the forecasts at the scheduling stage. The real-time power refers to the utility generation adjusted with the real-time data at the real-time power compensation stage.
The power peak of the DMPC strategy at the scheduling stage is 2.934 × 10 3 kW at the 11th hour of the first day, and its average power is 1.945 × 10 3 kW.Thus, the peak-to-average ratio is 1.510.Meanwhile, considering the forecast uncertainties, the real-time power generation of the utility at the real-time stage is different from that at the scheduling stage.The power peak of the DMPC strategy at real-time power compensation stage is 2.945 × 10 3 kW at the 11th hour of the first day, and its average power is 1.9455 × 10 3 kW, and hence its peak-to-average ratio is 1.514.Such results indicate that by the DMPC strategy, the impact caused by forecast uncertainties can be well handled.The peak-to-average ratio is reduced about 11.3% compared to the basic situation, and the peak power is reduced about 15% (more than 500 kW).This demonstrates that the DMPC strategy can effectively save social welfare for the society. The power peak of the DDA strategy at the scheduling stage is 3.292 × 10 3 kW at the 11th hour of the first day, and its average power is 2.018 × 10 3 kW.Thus, the peak-to-average ratio is 1.63.This shows that the performance of the DDA strategy at the scheduling stage is inferior to that of the DMPC strategy.Moreover, the power peak of the DDA strategy at the real-time power compensation stage is 3.307 × 10 3 kW at the 11th hour of the first day, and its average power is 2.029 × 10 3 kW.The peak-to-average ratio is thus 1.63.Compared with the DMPC strategy, the DDA strategy is inferior, Energies 2019, 12, 426 reducing only 4.23% of the peak-to-average ratio, and 4.4% of the peak power (about 150 kW) for the utility company The reason for the inferior performance of DDA compared to the DMPC is that in DDA both the forecast and the operation schedule are made at the beginning of the day; however, the forecast and the operation schedule of the DMPC are adaptively adjusted every hour according to the newly updated information. Figure 4 shows the power adjustment at the real-time stage under the DMPC and DDA strategy.From the figure, we can see that the amount of power adjustment with the DMPC strategy is much smaller than that with the DDA strategy.That is, the DDA is inferior to the DMPC strategy. Energies 2017, 10, x FOR PEER REVIEW 14 of 20 peak-to-average ratio is thus 1.63.Compared with the DMPC strategy, the DDA strategy is inferior, reducing only 4.23% of the peak-to-average ratio, and 4.4% of the peak power (about 150 kW) for the utility company The reason for the inferior performance of DDA compared to the DMPC is that in DDA both the forecast and the operation schedule are made at the beginning of the day; however, the forecast and the operation schedule of the DMPC are adaptively adjusted every hour according to the newly updated information. Figure 4 shows the power adjustment at the real-time stage under the DMPC and DDA strategy.From the figure, we can see that the amount of power adjustment with the DMPC strategy is much smaller than that with the DDA strategy.That is, the DDA is inferior to the DMPC strategy.In addition to the comparison from the view of the utility, we next discuss the performance of the DMPC strategy from the view of users.The operation routines for each user and their dispatchable units with the DMPC strategy at the scheduling stage is illustrated in Figure 5. 
User 1 buys 4.27 × 10⁴ kWh of energy from the utility and sells no power back to the utility company, as the power generated by the RESs is always less than the load demand. User 1 spends 4.314 × 10⁴ $ purchasing energy from the utility. The ESS charges 1.248 × 10³ kWh and discharges 1.127 × 10³ kWh over the whole time horizon. The overall charged energy is a little larger than the discharged energy because the power efficiency of the ESS is not 100%. The operation and maintenance cost of the ESS is 237.5 $. Meanwhile, user 1 curtails 237 kWh of flexible load demand, which results in a penalty cost of 533.8 $. The starting times of the shift-able appliances are delayed by about 10 h, resulting in a 0.39 $ penalty cost. The amount of energy adjustment for schedulable loads is 2.63 × 10³ kWh, resulting in a 54.124 $ penalty cost. Also, no schedulable appliance chooses to shift its operation time window.

User 2 buys 3.88 × 10⁴ kWh of energy from the utility and sells no power back to the utility company. 3.932 × 10⁴ $ is spent to purchase energy from the utility. The ESS charges 1.239 × 10³ kWh and discharges 1.117 × 10³ kWh over the horizon. The operation and maintenance cost of the ESS is 235.61 $. Meanwhile, user 2 curtails 88.01 kWh of flexible load demand, resulting in a 121.26 $ penalty cost. Since the discomfort penalty of user 2 is lower than that of user 1, curtailing loads is more cost-effective for user 2 than for user 1. The shift-able appliances are delayed by about 7 h, and the penalty cost for the delay is 0.335 $. Like user 1, no schedulable appliance of user 2 chooses to shift its operation time window. The amount of energy adjustment for schedulable loads is 2.15 × 10³ kWh, resulting in a 47.76 $ penalty cost.
User 3 buys 5.0 × 10⁴ kWh of energy from the utility and sells 15 kWh back to the utility company. The money spent to purchase energy from the utility is 5.191 × 10⁴ $, while the revenue made by selling energy back to the utility is 4.2 $. The ESS charges 1.00 × 10³ kWh and discharges 0.916 × 10³ kWh over the horizon. The operation and maintenance cost of the ESS is 191.89 $. Meanwhile, user 3 curtails 92.182 kWh of flexible load demand, resulting in a 117.85 $ penalty cost. Compared with user 2, the power curtailment action of user 3 is more effective. This is because both the penalty cost coefficient and the curtailed power of user 2 are lower than those of user 3, yet the penalty cost of user 2 is higher than that of user 3. The shift-able appliances are delayed by about 8 h, and the penalty cost is 0.265 $. Again, no schedulable appliance of user 3 chooses to shift its operation time window. The amount of energy adjustment for schedulable loads is 1.985 × 10³ kWh, resulting in a 45.125 $ penalty cost.

User 4 buys 5.526 × 10⁴ kWh of energy from the utility and sells no power back to the utility company. 5.594 × 10⁴ $ is spent to purchase energy from the utility. The ESS charges 0.887 × 10³ kWh and discharges 0.7935 × 10³ kWh over the horizon. The operation and maintenance cost of the ESS is 191.89 $. User 4 curtails 90.51 kWh of flexible load demand, resulting in a 135.578 $ penalty cost. The shift-able appliances are delayed by about 14 h, resulting in a 0.46 $ penalty cost. No schedulable appliance of user 4 chooses to shift its operation time window. The amount of energy adjustment for schedulable loads is 1.508 × 10³ kWh, resulting in a 31.98 $ penalty cost.
Though the rolling horizon and feedback mechanism can reduce the negative impact introduced by the random and intermittent outputs of the RESs to some extent, the forecast uncertainties cannot be completely eliminated because the forecast model is imperfect. Therefore, user 1 must purchase 67.19 kWh of energy from the utility at a high electricity price, with a total cost of 169.97 $. Correspondingly, user 1 sells 55.48 kWh of energy back to the utility at a low electricity price, making a revenue of 22.67 $. User 2 must purchase 72.03 kWh of energy from the utility at a total cost of 182.14 $. Correspondingly, user 2 sells 45.41 kWh of energy back to the utility, making a revenue of 18.3 $. Because the penetration level of RES output is highest for user 3, the influence of the forecast uncertainty on user 3 is also the highest. User 3 has to purchase more energy, i.e., 93.72 kWh, from the utility with a total cost of 231.62 $. Correspondingly, user 3 sells 72.5 kWh of energy back to the utility, making a revenue of 27.83 $. User 4 purchases 71.53 kWh of energy from the utility with a total cost of 175.8 $. Correspondingly, user 4 sells 78.66 kWh of energy back to the utility, making a revenue of 31.85 $.
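The purchase and selling figures above follow the settlement rule stated earlier: at the real-time stage, any shortfall relative to the schedule is bought from the utility at a higher price and any surplus is sold back at a lower price. The sketch below illustrates that rule only; the prices are hypothetical placeholders, not the tariff used in the study.

```python
# Illustrative sketch of the real-time settlement rule (not the study's code).

def realtime_settlement(deficit_kwh, surplus_kwh, buy_price, sell_price):
    """Cost of buying the energy deficit and revenue from selling the surplus."""
    cost = deficit_kwh * buy_price      # shortfall bought at the higher price
    revenue = surplus_kwh * sell_price  # surplus sold back at the lower price
    return cost, revenue

# Example with user 1's reported energy amounts and hypothetical prices ($/kWh).
cost, revenue = realtime_settlement(deficit_kwh=67.19, surplus_kwh=55.48,
                                    buy_price=2.5, sell_price=0.4)
print(f"cost = {cost:.2f} $, revenue = {revenue:.2f} $")
```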
The operation costs for the DMPC and DDA strategies at the scheduling and real-time power compensation stages are shown in Table 6. From the results, we can find that the operation costs of the DMPC and DDA are nearly the same at the scheduling stage but different at the real-time stage (the cost with the DDA is higher than that with the DMPC). This is because the forecasted electricity prices for the two strategies at the scheduling stage are similar. However, at the real-time stage, the closed-loop-based DMPC can adjust control actions with newly updated forecasts while the open-loop-based DDA cannot. This leads the DDA to cost more.

Figure 6 presents the utility power generation routine with and without considering the penalty cost term. We use the term "no penalty" to refer to the case in which no penalty cost is considered. Under the no-penalty case, the peak power is 2.898 × 10³ kW, and the average power is 1.872 × 10³ kW. The peak-to-average ratio is thus 1.548. Though the peak power and the average power are both lower than in the case considering the penalty cost, the peak-to-average ratio for the no-penalty case is higher than for the penalty case. This clearly shows that the penalty cost term has an important impact on the utility.
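The structural difference behind these results, as noted above, is that the DDA forecasts and schedules once per day in open loop, whereas the DMPC re-forecasts and re-optimizes every hour and applies only the first control action. The skeleton below sketches the two loop structures; the forecast and optimization functions are stubs, not the paper's models.

```python
# Schematic contrast between day-ahead (open-loop) and rolling-horizon (closed-loop)
# scheduling. Only the loop structure is meaningful; the functions are placeholders.

HORIZON = 24  # hours

def forecast(start_hour, latest_measurements=None):
    # Placeholder forecast of prices, RES output and loads over the horizon.
    return [0.0] * HORIZON

def optimize_schedule(forecast_data):
    # Placeholder for solving the scheduling problem over the horizon.
    return [0.0] * len(forecast_data)

def day_ahead(total_hours=24):
    # DDA: forecast once, optimize once, apply the whole schedule open-loop.
    plan = optimize_schedule(forecast(0))
    return plan[:total_hours]

def rolling_horizon(total_hours=24):
    # DMPC: each hour, update the forecast with new measurements, re-optimize,
    # and apply only the first control action (receding horizon with feedback).
    applied = []
    for t in range(total_hours):
        plan = optimize_schedule(forecast(t, latest_measurements="data at hour t"))
        applied.append(plan[0])
    return applied

print(len(day_ahead()), len(rolling_horizon()))
```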
Comparison of the Parallel and Sequential Optimization Algorithms

This section compares the performance of the sequential distributed optimization (SDO) algorithm [16] with the parallel distributed optimization (PDO) algorithm described in Table 1. With the SDO algorithm, the users update their operation schedules sequentially; namely, the electricity price has to be updated after each user's optimization scheduling. The first day's data is used to present the comparison results, as shown in Figures 7 and 8.

Figure 7 shows that the SDO algorithm needs to run 11 iterations to achieve an equilibrium for a single user. Since there are four users, the total number of required iterations is 44. However, the PDO algorithm reaches an equilibrium after 12 iterations. This indicates that the PDO algorithm converges much faster than the SDO algorithm. Moreover, it can be observed from Figure 8 that the peak power of the PDO algorithm is only slightly higher than that of the SDO algorithm. This further demonstrates the advantage of the PDO over the SDO algorithm.
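The iteration counts above reflect the update order of the two schemes: the SDO updates one user at a time and refreshes the price signal after every individual update, while the PDO lets all users respond in parallel to the same price signal once per round. The following sketch shows only that structural difference; the price and best-response functions are stubs and do not reproduce the paper's optimization models.

```python
# Structural sketch of sequential (SDO) versus parallel (PDO) distributed updates.

def price_signal(schedules):
    # Placeholder: a price derived from the aggregate of all users' schedules.
    return sum(sum(s) for s in schedules)

def best_response(user_index, price):
    # Placeholder for one user's local optimization given the current price.
    return [0.0] * 24

def sequential_round(schedules):
    # SDO: one user at a time; the price is refreshed after each user's update.
    for i in range(len(schedules)):
        price = price_signal(schedules)
        schedules[i] = best_response(i, price)
    return schedules

def parallel_round(schedules):
    # PDO: one shared price per round; all users update simultaneously.
    price = price_signal(schedules)
    return [best_response(i, price) for i in range(len(schedules))]

users = [[1.0] * 24 for _ in range(4)]
sequential_round([s[:] for s in users])
parallel_round([s[:] for s in users])
```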
Conclusions

In this study, an MPC-based distributed demand side management strategy is proposed to provide optimal control actions for energy users in a smart grid. The users are equipped with renewable energy source (RES) generators, energy storage system (ESS) units and different kinds of smart loads. For each user, an energy management system (EMS) is used to determine the operation scheme of the dispatchable units and the interaction with the utility (i.e., purchasing power from or selling power back to the utility). Experimental results show that the proposed MPC-based distributed DSM (DMPC) strategy enables users to optimally control their own subsystems individually and to coordinate them properly when necessary. Moreover, its performance is demonstrated to be better than that of the traditional day-ahead programming-based distributed DSM (DDA) strategy [32]. In addition, the proposed parallel distributed optimization method is also demonstrated to be superior to the sequential distributed optimization algorithm.

With respect to future work, we would like to analyze the games among the users and the utility. Also, the convergence property of the parallel distributed optimization method needs to be theoretically analyzed.

Figure 1. Power and information flows among users and the utility.
Figure 2. PV and wind generation, base load demand and the basic buy/selling power for each user.
Figure 3. Utility generation for DMPC and DDA strategies in different cases.
Figure 4. Power adjusted for DMPC and DDA strategies at real-time stage.
Figure 5. Power routine for each user and each dispatchable unit with DMPC strategy.
Figure 6. Power generation of the utility with or without considering penalty cost terms.
Figure 7. Utility fuel cost and BESS power change for the PDO and SDO algorithms.
Figure 8. Utility power of the first day for PDO and SDO algorithms.
Table 2. Power parameters for each user.
Table 3. Parameters of shift-able loads.
Table 5. Parameters of ESS.
Table 6. Operation costs for each user with different strategies.
Great saphenous vein stump: a risk factor for superficial/deep venous thrombosis and an indication for prophylactic anticoagulation? - a retrospective analysis

ABSTRACT

Background: Great saphenous vein (GSV) grafts are used for coronary artery bypass surgeries, but the remaining stump of the GSV may be a nidus for superficial and deep vein thrombosis. This study aims to determine the risk of thrombosis in the GSV stump in patients who developed lower extremity swelling following coronary artery bypass grafting (CABG). Methods: We conducted a single-center retrospective analysis at Abington Jefferson Hospital of 100 patients who underwent CABG with a GSV graft. Patients were monitored via follow-up for seven days for the development of saphenous vein thrombosis, without any prophylactic anticoagulation for venous thrombosis. Risk factors including age, diabetes, hypertension, smoking, familial thrombophilias, family history of thrombosis, and malignancy, as well as confounding factors such as early mobilization that may potentially alter the results, were recorded. Results: The mean age of the included patients was 70 years; 65% of participants were men and 35% were women. Fourteen percent of the patients developed pain, swelling, and edema in the leg from which the graft was taken. We included patients aged >50 years with coronary artery disease who underwent CABG with a saphenous vein graft and developed lower extremity symptoms concerning for thrombosis. These patients underwent duplex ultrasound for possible GSV stump thrombosis. Patients with coronary artery disease but no CABG or no lower extremity edema were excluded from the study. We found no saphenous vein thrombosis in the stump of the GSV in patients with clinical symptoms of thrombosis in their lower extremities based on duplex imaging. Conclusion: Based on our findings, the postoperative risk of developing thrombosis at the GSV stump and of its extension to the deep veins is low and does not warrant prophylactic anticoagulation for venous thromboembolism. However, further prospective studies with larger samples over an extended duration are warranted for a better assessment of the risk of venous thrombosis in the GSV stump with minimal confounding factors.

Introduction

CABG is one of the more common cardiac surgeries performed; between 100,000 and 200,000 patients per year receive CABG in the US [1]. A saphenous vein graft is often used in a CABG. The donor site of the graft can act as an inciting factor for superficial venous thrombosis (SVT) in the leg after surgery [2,3]. Injury to a vein can start the thrombotic process, which can progress from SVT to deep venous thrombosis (DVT) [4]. It has been reported that up to 44% of patients with SVT go on to develop a DVT [5]. The concerning part of this is that up to one-third of the patients who develop SVT may go on to develop asymptomatic pulmonary embolism (PE), and up to 13% of these patients also develop symptomatic DVT [5]. Clinical symptoms concerning for thrombosis in the lower extremity, especially at the donor site of saphenous vein harvesting for CABG, are usually worked up with physical examination and imaging. Previous studies have tried to ascertain the risk of SVT in the saphenous vein graft after harvesting for CABG. However, there is still no convincing evidence linking CABG to SVT in the GSV stump, and data remain limited. This study was undertaken to determine the risk of thrombosis in the GSV stump after harvesting the vein for CABG while limiting the other secondary risk factors of thrombosis.
Materials and methods

We performed a single-center study at the Abington Memorial Hospital in Pennsylvania. A total of 100 patients who underwent CABG with GSV harvesting were included in this study. Included patients were monitored for up to seven days after the procedure to assess for the development of saphenous vein thrombosis. Patients with clinical symptoms such as pain, swelling, and edema concerning for GSV thrombosis underwent duplex scanning to rule out thrombosis. We formulated strict selection criteria and included patients who did not have any other risk factors for thrombosis. We excluded patients with other possible explanations for thrombosis such as pregnancy, acquired thrombophilias, bone fractures, familial thrombophilia, malignancy, stasis, medications, previous thromboembolism, and immobility. All aspects that might potentially alter the results were taken into consideration. Our patient population did include patients with age >50 years, diabetes, hypertension, and smoking, which can contribute to the risk of thrombosis. We excluded patients with a previous history of thrombosis, and the preoperative ultrasounds of the included patients recorded in the medical record did not show any thrombosis. Patients in our study were not immobile after CABG surgery and were started on aspirin/clopidogrel to prevent coronary artery restenosis.

Results

The mean age of the 100 patients included in this study was 70 years, with 65% males and 35% females. Fourteen percent of study participants developed pain, swelling, and edema of the leg on the side from which the graft was taken. These patients further underwent Doppler ultrasound to check for possible GSV stump thrombosis, although none were positive for GSV thrombosis based on duplex imaging. This is demonstrated in Figure 1.

Discussion

Coronary artery bypass grafting is a standard procedure for multiple-vessel coronary artery disease. Conventionally, cardiothoracic surgeons use saphenous vein grafts for coronary artery bypass grafting. The surgical technique of taking a graft includes exposing the great saphenous vein by making an incision 3-5 cm proximal to the medial malleolus, ligating the side branches with dissection of the saphenous vein, cannulating the vessel, and closing the leg wound [6-8]. Postoperative CABG patients are sometimes found to have thrombosis of the leg veins. Thrombosis at the site of the remaining stump of the saphenous vein in the lower extremity can cause SVT, which can later progress to DVT and PE [1]. The initial presenting symptoms of thrombosis in the leg after CABG can be unilateral pain, swelling, and edema, although many cases may remain asymptomatic. The exact mechanism of increased thrombosis at the donor site of the vein is not fully understood; it may be due to stasis/immobilization and endothelial damage at the stump of the great saphenous vein. In a study by Labropoulous et al., 2335 patients were studied after receiving CABG with a GSV graft and perioperative heparin use [2]. Out of the 2335 patients, 98 were found to have signs and symptoms concerning for venous thromboembolism in the lower extremity during hospitalization or after discharge. Of these 98 patients, 19 were found to have thrombosis. Of these 19, five patients were excluded: one patient had a protein-C deficiency, and four patients had thrombosis in the contralateral leg or thrombosis not at the GSV stump. A total of 15 patients had a thrombosis at the site of the GSV stump.
Of these, two cases had superficial vein thrombosis at a site away from the saphenofemoral junction (SFJ), and the rest had a thrombus in the GSV and its tributaries. The thrombus at the GSV stump ranged from 1 to 4 cm. The sample of cases with thrombus was too small for statistically meaningful comparison among subgroups of vein distribution, as mentioned by Labropoulous. In our research, we focused on 100 post-CABG cases who underwent a duplex scan for any concerns/symptoms of vein thrombosis in the leg at the GSV stump site. Among the study population, none of the symptomatic patients showed thrombosis on duplex imaging. After controlling for other possible causes of thrombosis, we believe that there is a negligible risk of thrombosis in the stump. Our selection criteria best served the objective of the study, at the expense of including only a small population, but one devoid of any other risk factors. Additionally, we believe that antiplatelet therapy is sufficient in post-CABG patients and that these patients do not require anticoagulation.

Many surgeons prefer to put patients on DVT prophylaxis throughout the hospitalization following CABG due to immobility after major surgery [9]. Patients who develop thrombosis can have an underlying inherited or acquired thrombotic risk due to low levels of anticlotting factors or increased clotting factors. This is illustrated by the case of protein-C deficiency that developed SVT after CABG in the study by Labropoulous [2]. In another study, by Hanson et al., up to 35% of the patients with SVT had an underlying hypercoagulability [10]. Conflicting literature exists on the prevention of thrombosis in leg veins after CABG, and practice depends on the surgeon. Most patients receive heparin anticoagulation pre-, peri-, and post-procedure, and it may or may not prevent DVT occurrence. Interestingly, however, up to 13% of patients after CABG can develop thrombosis in the leg veins despite a maximum dose of heparin [1]. In the study by Labropoulous, 15 patients developed GSV stump thrombosis despite perioperative heparin use. In our study, we excluded patients with secondary risk factors for thrombosis such as pregnancy, acquired thrombophilias, bone fractures, familial thrombophilia, malignancy, stasis, medications, previous thromboembolism, and immobility. Our population had no risk factors for thrombosis other than the postoperative risks due to immobilization and CABG-related stump thrombosis. The former was reduced by selecting patients who had early ambulation after the surgery, and the latter was the subject of assessment in this study. We hypothesized that post-CABG patients are at high risk of thrombosis, as suggested by previous studies; however, the results in our study were different, as we controlled for the confounding factors. Our study participants did not develop any thrombus in the GSV stump despite receiving no heparin prophylaxis after CABG; patients were started on aspirin/clopidogrel and ambulated early. The literature also mentions that early postoperative swelling of the vein donor leg can be expected; it is usually benign and self-resolving, and in most cases there is no underlying thrombus [11].

In patients with a diagnosed thrombus in the leg veins after GSV harvesting, the site of involvement is important for management. A thrombus below the knee is usually benign unless it progresses to the proximal veins or causes pulmonary embolism. In the study by Labropoulous et al., 5 out of 15 patients who had a venous thrombus developed PE symptoms and underwent imaging for PE.
Two patients had a high-probability PE finding on a ventilation-perfusion scan [2]. As discussed by Lohr et al., symptoms, risk factors, and physical examination are not accurate predictors of common femoral vein extension, as the majority of cases lack positive signs and symptoms [12]. Another study, by Murgia et al., showed that thrombus presence and extent can be diagnosed by duplex scanning with 100% accuracy [13,14]. In our study, we also performed duplex scanning on patients with symptoms/signs of thrombosis at the GSV stump site. After a confirmed diagnosis of thrombosis of the lower extremity, the primary methods of treatment of SVT/DVT at the graft site include anticoagulation and surgical intervention. The anticoagulation regimen includes heparin and warfarin. In the study by Labropoulous et al., 13 out of 15 patients with a venous thrombus in the leg were treated with heparin and warfarin with INR monitoring [2]. Despite treatment with anticoagulation, two patients still had an extension of the thrombus [2]. Surgical clipping/ligation can treat patients with a limited response to anticoagulation and also prevent the extension of thrombosis from the superficial to the deep veins [10]. Surgical removal of the SFJ also prevents DVT progression and PE. Another study, by Lohr et al., reported that surgical intervention is effective if the venous thrombus is within 3 cm of the SFJ [12]. Studies by Lofgren et al. and Pulliam et al. reported that surgical intervention is effective in preventing the progression of thrombus [15,16].

A link between CABG and thrombosis at the GSV stump site has not been well established. The thrombotic risk at the GSV site in post-CABG patients is not high. The pre-, peri-, and postoperative use of heparin is controversial. In our study, we did not use prophylactic anticoagulation for venous thromboembolism, and we did not find any thrombus in cases presenting with lower extremity edema after CABG. Lower extremity edema in the leg with the GSV stump can be a normal finding, but imaging should be done to rule out a thrombus. The main strength of this study is that it is only the second study to discuss these associations.

Conclusion

The association of SVT with the SVG stump following CABG is theoretical. Our study failed to show any thrombus in the lower extremity in patients off anticoagulation with clinical symptoms of thrombus seven days after CABG. To date, there is insufficient evidence to recommend for or against routine DVT prophylaxis after CABG to prevent SVT in the SVG stump and its extension to DVT. In the future, large randomized clinical trials would be helpful to establish the causes of diagnosed stump thrombosis and to develop guidelines for venous thromboembolism prophylaxis after CABG.

Authors contributions

Yasir Khan developed the research strategy and did the data collection. Muhammad Arslan Cheema coordinated the data collection. Ammar Abdullah did the statistical analysis and helped with the manuscript. Yasar Sattar rewrote the revised manuscript, edited the images, and did the proofreading. Shujaul Haq helped with reference arrangement and data mining. Asoka Balaratna did the critical review. Waqas Ullah wrote the initial manuscript.

Disclosure statement

No potential conflict of interest was reported by the authors.
Simultaneous Determination of Ergot Alkaloids in Swine and Dairy Feeds Using Ultra High-Performance Liquid Chromatography-Tandem Mass Spectrometry

Ergot alkaloids (EAs) are mycotoxins mainly produced by the fungus Claviceps purpurea. EAs are known to affect the nervous system and to be vasoconstrictors in humans and animals. This work presents recent data on swine and dairy feeds regarding 11 major EAs, namely ergometrine, ergosine, ergotamine, ergocornine, ergocryptine, ergocristine, ergosinine, ergotaminine, ergocorninine, ergocryptinine, and ergocristinine. A reliable, sensitive, and accurate multiple-mycotoxin method, based on extraction and clean-up with a Mycosep 150 multifunctional column prior to analysis using UHPLC-MS/MS, was validated using samples of swine feed (100) and dairy feed (100) for the 11 targeted EAs. Based on the obtained validation results, this method showed good recovery and inter-day and intra-day precision, in accordance with standard criteria, ensuring reliable occurrence data on EA contaminants. More than 49% of the swine feed samples were contaminated with EAs, especially ergocryptine (-ine) (40%), ergosine (-ine), and ergotamine (-ine) (37%). However, many of the 11 EAs were not detectable in any swine feed samples. In addition, there were contaminated (positive) dairy feed samples, especially for ergocryptine (-ine) (50%), ergosine (-ine) (48%), ergotamine (-ine), and ergocristine (-ine) (49%). The mycotoxin levels in the feed samples in this study almost complied with the European Union regulations.

Introduction

Mycotoxins are hazardous chemicals produced by fungi of the Aspergillus, Fusarium, Penicillium, and Claviceps genera. Mycotoxins can contaminate foods, feeds, and agricultural products [1]. To date, more than four hundred mycotoxins with differing toxicities have been identified in cereals, fruits, vegetables, and other agricultural commodities, resulting in potential adverse effects on human and animal health and in economic losses [2-4]. Moreover, mycotoxins are persistent in foods and feeds and are not completely eliminated during processing operations [3]. Recently, mycotoxins were a major category of border rejections in the European Union (EU) according to the annual report of the Rapid Alert System for Food and Feed (RASFF) [3]. The Food and Agriculture Organization (FAO) has suggested that one-fourth of global food crops are contaminated by mycotoxins [5]. Because of their pathogenicity and lethality, worldwide authorities including the World Health Organization (WHO) have called for monitoring of mycotoxins in foodstuffs and feeds and for strict maximum levels and legislation, in order to provide early warning of mycotoxin contamination and to reduce national and international losses. In addition, the impact of climate change on Claviceps spp. infection of crops could potentially increase food safety risks for humans and animals due to mycotoxin contamination of the end products [6].

Ergot alkaloids (EAs) are toxic secondary metabolites produced by fungi of the Claviceps genus, mainly by the parasitic fungus Claviceps purpurea, which parasitizes the seed heads of living plants at the time of flowering [7]. EAs are known to cause adverse health effects in humans and animals and have been found in cereals, cereal products, barley, oats, and both rye- and wheat-containing foods [8-11]. Outbreaks of ergotism in livestock still occur, and EAs can induce abortion through their toxicity [12].
Pigs and cattle have shown symptoms after exposure to EAs, causing financial problems for both breeders and the meat industry [12,13]. In animals, including pigs, exposure to EAs from grains can cause liver and intestinal alterations [14]. In Directive 2002/32/EC on undesirable substances in animal feed and its amendments, the maximum content of rye ergot (Claviceps purpurea) in feed containing unground cereals has been established at 1000 mg/kg. EAs have been reported in cereals in European countries, Canada, the United States, and China [15-17]. There have also been some reports on the presence of EAs in feed from other countries, with 86-100% of EAs detected in feed samples from Germany [18] and 83% of compound feeds containing EAs, with an average concentration of 89 µg/kg and a maximum concentration of 1231 µg/kg, in the Netherlands [19]. The main ergot alkaloids produced by Claviceps species are ergometrine, ergotamine, ergosine, ergocristine, ergokryptine, and ergocornine, and the group of agroclavines [20]. Ergotamine and ergosine are heat stable, whereas ergocristine, ergokryptine, ergocornine, and ergometrine are reduced by heating [21]. The conversion of ergopeptines to ergopeptinines is accelerated by either acidic or alkaline solutions. However, ergopeptinines can also be transformed back to ergopeptines in organic solvents [7,22]. Studies have developed reliable analytical methods for EAs in agricultural commodities [12,15,17,19,22-26], mainly using HPLC-MS/MS. However, a remaining challenge for UHPLC-MS/MS methods is the optimization of the sample preparation procedure, because signal suppression and enhancement usually occur due to interferences from the matrix (matrix effect), leading to unreliable results [25]. To compensate for the matrix effect, some methods developed for the analysis of EAs in agricultural commodities have used a MycoSep® multifunctional column [2]. To the best of the authors' knowledge, to date there have been few reports on the contamination of EAs in foodstuffs and feeds of any kind in Thailand. The current study investigated the occurrence of 11 EAs in swine and dairy feeds using a validated UHPLC-MS/MS method with a multifunctional SPE column procedure. An SPE column was used for sample cleanup. Under the optimized conditions, the limit of detection, limit of quantification, and linearity were studied, and accuracy and precision were evaluated as well. This work provides a promising means of monitoring EAs in feed samples.

Method Validation

The results for the limit of detection, limit of quantification, and linearity are reported in Table 1. The method produced good linearity: over the relevant working range, the calibration curves showed good linearity with r² values higher than 0.995. The LOD was 0.25 ng/g, and the LOQ was 0.5 ng/g (Table 1). The recovery values were 70-120% and the relative standard deviation (RSD) values were less than 20% [27] for all 11 ergot alkaloids, as summarized in Tables 2 and 3 for the swine and dairy feeds, respectively. For identification requirements, the deviation of the relative ion ratio in sample extracts was lower than 30% for all 11 ergot alkaloids [27].
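For readers who want to reproduce this kind of acceptance check, the short sketch below applies the criteria cited above (mean recovery within 70-120% and RSD below 20%) to a set of spiked replicates. It is illustrative only; the replicate values are hypothetical placeholders, not data from Tables 2 and 3.

```python
# Illustrative recovery/precision acceptance check (hypothetical data).
from statistics import mean, stdev

def passes_criteria(measured_ng_g, spiked_level_ng_g,
                    rec_range=(70.0, 120.0), max_rsd=20.0):
    recoveries = [100.0 * m / spiked_level_ng_g for m in measured_ng_g]
    rec = mean(recoveries)
    rsd = 100.0 * stdev(recoveries) / rec
    ok = rec_range[0] <= rec <= rec_range[1] and rsd <= max_rsd
    return rec, rsd, ok

# Five hypothetical replicate results (ng/g) at a 10 ng/g spiking level.
print(passes_criteria([9.1, 9.8, 10.4, 9.5, 10.1], spiked_level_ng_g=10.0))
```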
Matrix Effect Study

The study used the percentage of signal suppression/enhancement (%SSE) to evaluate the matrix effects in the two types of feed matrices. If the suppression or enhancement is marginal, the %SSE is very close to 100%; if there is strong suppression or enhancement, the %SSE deviates from 100%. In the swine feed samples, the %SSE (94.5-106.7%) was within the acceptable range (80-120%), except for ergometrine, which exhibited strong signal suppression with a %SSE (75.1%) below the acceptable range. In the dairy feed samples, the %SSE for the 11 ergot alkaloids was within the acceptable range (83.8-98.1%), except for ergotamine and ergometrine, which exhibited strong signal suppression (%SSE of 79.6% and 44.5%, respectively). The %SSE values for the two types of feed matrices are summarized in Figure 1. Given these matrix effects, quantification of the 11 ergot alkaloids using matrix-matched calibration is necessary. The extracted ion chromatograms (XIC) of the 11 EAs spiked into swine and dairy feed samples are illustrated in Figures 2 and 3, respectively.

Occurrence of EAs in Swine and Dairy Feeds

The method derived from this study was applied to determine the 11 ergot alkaloids in 200 feed samples consisting of swine feed (n = 100) and dairy feed (n = 100). In the swine feed samples, more than 49% were contaminated with ergot alkaloids, especially ergocryptine (-ine) (40%), ergosine (-ine), and ergotamine (-ine) (37%). However, in more than 50% of the swine feed samples, none of the 11 ergot alkaloids were detectable. The dairy feed samples had the same prevalent contaminants as the swine feed samples but with a higher number of positive samples, especially for ergocryptine (-ine) (50%), ergosine (-ine) (48%), ergotamine (-ine), and ergocristine (-ine) (49%), as shown in Tables 4 and 5. The mycotoxin levels in all feed samples almost complied with the EU regulation (≤1000 mg/kg of 11 ergot alkaloids) [28].
There are several reports on the presence of EAs in feed from different countries, with 86-100% of the listed EAs detected in feed samples from Germany [1] and 83% of compound feeds containing EAs, with an average concentration of 89 µg/kg and a maximum concentration of 1231 µg/kg, in the Netherlands [19]. The major detected EAs were ergosine, ergotamine, ergocristine, and ergocryptine. Interestingly, Malysheva et al. [13] reported the occurrence of EAs over three years in 1065 cereal samples originating from 13 European countries, with 52% of rye, 27% of wheat, and 44% of total samples containing EAs (ergosine, ergocristine, and ergocryptine) at levels ranging from less than 1 to 12,340 µg/kg. In Spain, the concentrations of individual ergot alkaloids ranged between 5.9 µg/kg for ergosinine and 145.3 µg/kg for ergometrine, while the total ergot alkaloid content ranged from 5.9 to 158.7 µg/kg in swine samples. About 12.7% of samples revealed contamination by at least one ergot alkaloid, and among the contaminated swine samples, 65% were contaminated by more than one [22]. The ergot contamination levels and patterns differ with geographical region and environmental conditions [10].

Conclusions

EAs are hazardous mycotoxins in food and feed samples. Our results showed that the LC-ESI-MS/MS technique is an excellent tool for the determination of the 11 EAs in swine and dairy feed samples.
The validated LC-MS/MS method using a multifunctional column was successfully performed according to the SANTE/11813/2017 standard. The LODs and LOQs were 0.25 and 0.5 ng/g, respectively, for the EAs, and recoveries were 90.6-120%. When this technique was applied to real feed samples, it showed that the 11 EAs were quantifiable in animal feeds. The mycotoxin levels in the swine and dairy samples almost complied with the EU regulations. The presence of ergot sclerotia is regulated to a maximum of 500 mg/kg in unprocessed cereals for humans [29] and 1000 mg/kg in feed materials and compound feed containing unground cereals [30]. However, further studies with a larger sample size are needed to confirm these as acceptable levels. Toxigenic Claviceps species should also be investigated for a better understanding of the production of EAs and to develop appropriate solutions for disease management.

Reagents and Materials

The LC-MS/MS grade reagents, consisting of ammonium carbonate and acetonitrile (MeCN), were purchased from Fluka (St. Louis, MO, USA). The Mycosep 150 multifunctional column for extract clean-up was purchased from Romer Labs (Tulln, Austria). Deionized water was produced using a Milli-Q system (Millipore; Bedford, MA, USA).

Preparation of Standard Solutions

The analytical standard ergot alkaloid stock solutions were prepared in acetonitrile to provide working standard solutions with concentrations of 100 µg/mL for ergometrine, ergosine, ergotamine, ergocryptine, ergocristine, and ergocornine and 25 µg/mL for ergosinine, ergotaminine, ergocryptinine, ergocristinine, and ergocorninine. For the spiking experiments of the method validation, working standard solutions were freshly prepared at 1.0 µg/mL and stored in amber vials at −20 °C for one week.

Sample Collection

A total of 200 feed samples consisting of swine feed (n = 100) and dairy feed (n = 100) were randomly collected from animal farms in different regions of Thailand. All samples were ground in a ZM200 rotor mill (Retsch GmbH, Haan, Germany) into a fine powder (0.50 mm) and stored at −20 °C before analysis.

Sample Preparation

The sample preparation protocol applied was developed based on Krska et al. [10]. Briefly, 5 g of homogenized feed sample was weighed into a 50 mL polypropylene (PP) centrifugation tube, followed by the addition of 25 mL of acetonitrile-ammonium carbonate buffer (3.03 mM), 84:16 (v/v). The tube was closed and shaken using a laboratory shaker (IKA Labortechnik; Staufen, Germany) for 30 min at 240 rpm. The extract was passed through Whatman No. 4 filter paper, and 4 mL of the extract was transferred to the Mycosep 150 multifunctional column (Romer Labs, Tulln, Austria). Then, 1 mL of the purified extract was evaporated to dryness at 40 °C. The residue was reconstituted in 500 µL of 50% mobile phase, and the mixture was passed through a 0.22 µm nylon filter before being used in the LC-MS/MS analysis.
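As a back-of-the-envelope check of the dilution implied by this sample preparation (a sketch, assuming the MycoSep clean-up itself neither dilutes nor concentrates the extract and that any losses are compensated by the matrix-matched calibration), the final vial contains 0.4 g of sample equivalent per mL, so a vial concentration of 0.2 ng/mL corresponds to about 0.5 ng/g in the feed, which is consistent with the reported LOQ:

```python
# Conversion from a vial concentration (ng/mL) back to feed content (ng/g),
# based on the preparation described above (sketch; assumes no dilution or
# concentration in the clean-up step itself).

SAMPLE_G = 5.0           # g of ground feed
EXTRACT_ML = 25.0        # mL of extraction solvent
EVAPORATED_ML = 1.0      # mL of cleaned extract evaporated to dryness
RECONSTITUTED_ML = 0.5   # mL of 50% mobile phase used to reconstitute

sample_equiv_g_per_ml = (SAMPLE_G / EXTRACT_ML) * (EVAPORATED_ML / RECONSTITUTED_ML)

def vial_to_feed(conc_ng_per_ml):
    """Convert a concentration in the injected solution (ng/mL) to ng/g of feed."""
    return conc_ng_per_ml / sample_equiv_g_per_ml

print(sample_equiv_g_per_ml)  # 0.4 g of feed per mL of final solution
print(vial_to_feed(0.2))      # 0.5 ng/g, matching the reported LOQ
```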
UHPLC-MS/MS Analysis

The 11 target ergot alkaloids were analyzed using the UHPLC-MS/MS method. Chromatographic separation was developed according to Krska et al. [10]. The analysis used a Shimadzu LC-MS 8060 system (Shimadzu, Tokyo, Japan) equipped with a Gemini analytical column (150 × 2.0 mm i.d., 5.0 µm particle size; Phenomenex; Torrance, CA, USA) maintained at 30 °C. The mobile phase consisted of 3.03 mM ammonium carbonate in deionized water (A) and MeCN (B), with detection in ESI (+). The gradient elution started at the initial conditions of 5% B. The proportion of B was immediately increased from 5% to 17% within 1 min and further linearly increased to 47%, 54%, and 80% after 2, 10, and 15 min, respectively. Subsequently, the proportion of B was decreased to the initial conditions (5%) over 1 min, followed by a hold time of 5 min, resulting in a total run time of 21 min. The flow rate was kept at 0.5 mL/min throughout the run, and 10 µL of sample extract was injected into the LC-MS/MS system. The Shimadzu LC-MS 8060 system was equipped with an electrospray (ESI) ion source operated in positive mode. The ion source parameters were a nebulizing gas flow of 3 L/min, a heating gas flow of 10 L/min, an interface temperature of 300 °C, a CDL temperature of 250 °C, a heating block temperature of 400 °C, and a drying gas flow of 10 L/min. The dwell time (ms), Q1 pre bias (V), CE (V), and Q3 pre bias (V) were optimized during infusion of the individual analytes (100 ng/mL) using automatic infusion. The MRM transitions and compound-dependent parameters for the 11 ergot alkaloids are summarized in Table 6.

Method Validation Procedure

The method performance characteristic parameters were determined to assess the efficiency of the analytical method by evaluating the linearity, accuracy, precision, LOD, and LOQ for EA contamination in swine and dairy feed samples. The analytes were quantified using matrix-matched calibration standards, with pre-spiked calibration curves for the 11 EAs at levels in the range of 0.5-100.0 ng/g. The accuracy and within-day precision (%RSD) were determined by analyzing five replicates at three levels. The inter-day precision was determined at the same levels as the within-day precision on three different days (n = 15). LODs and LOQs were calculated by analyzing spiked samples at low concentration levels. The LOD was determined as the lowest concentration of the analyte for which the signal-to-noise (S/N) ratio was 3:1, whereas an S/N ratio of 10:1 was used for the LOQ.

Matrix Effects Study

The matrix effects of the method were evaluated in the two types of feed matrices: swine and dairy feed. Matrix-matched calibration curves were prepared at seven levels in the range of 0.5-100.0 ng/g (n = 3 per concentration). The matrix effect, expressed as the matrix-induced signal suppression/enhancement (SSE%), was defined as the percentage ratio of the matrix-matched calibration slope to the solvent calibration slope. Therefore, the matrix-matched calibration curves were used for quantitative analysis.
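The %SSE defined above is simply the ratio of the two calibration slopes. The sketch below computes it with an ordinary least-squares slope; the calibration responses are hypothetical placeholders, not the study's peak areas.

```python
# Illustrative %SSE calculation from solvent and matrix-matched calibration data.

def slope(x, y):
    """Ordinary least-squares slope of y on x."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    num = sum((xi - mx) * (yi - my) for xi, yi in zip(x, y))
    den = sum((xi - mx) ** 2 for xi in x)
    return num / den

levels = [0.5, 1, 5, 10, 25, 50, 100]                    # calibration levels (ng/g)
solvent_resp = [52, 103, 515, 1030, 2575, 5150, 10300]   # hypothetical peak areas (solvent)
matrix_resp  = [44, 88, 440, 880, 2200, 4400, 8800]      # hypothetical peak areas (matrix)

sse = 100.0 * slope(levels, matrix_resp) / slope(levels, solvent_resp)
print(f"%SSE = {sse:.1f}")  # values well below 100 indicate signal suppression
```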
Impact of Magnesium Supplementation on Blood Pressure: An Umbrella Meta-Analysis of Randomized Controlled Trials

Background and aim: Conflicting results on the effect of magnesium supplementation on blood pressure have been published in previous meta-analyses; hence, we conducted this umbrella meta-analysis of RCTs to provide a more robust conclusion on its effects.

Methods: Four databases, including PubMed, Scopus, EMBASE, and Web of Science, were searched to find pertinent papers published in international scientific databases from inception up to July 15, 2024. We utilized STATA version 17.0 to carry out all statistical analyses (Stata Corporation, College Station, TX, US). The random-effects model was used to calculate the overall effect size (ES) and confidence interval (CI).

Findings: Ten eligible review papers with 8610 participants studied the influence of magnesium on SBP and DBP. The pooling of their effect sizes resulted in a significant reduction of SBP (ES = -1.25 mmHg; 95% CI: -1.98, -0.51; P = 0.001) and DBP (ES = -1.40 mmHg; 95% CI: -2.04, -0.75; P = 0.000) with magnesium supplementation. In subgroup analysis, a significant reduction in SBP and DBP was observed for magnesium interventions with a dosage ≥400 mg/day (ES for SBP = -6.38 mmHg; ES for DBP = -3.71 mmHg), as well as in studies with a treatment duration of ≥12 weeks (ES for SBP = -0.42 mmHg; ES for DBP = -0.45 mmHg).

Implications: The findings of the present umbrella meta-analysis showed an overall decrease of SBP and DBP with magnesium supplementation, particularly at doses of ≥400 mg/day for ≥12 weeks.

Introduction

Systemic hypertension, a persistent elevation of the systemic arterial blood pressure (BP), is a highly prevalent condition and a major independent risk factor for mortality and cardiovascular disease. 1 Preventing and treating hypertension has become a significant factor in decreasing the risk and burden of various diseases, thus reducing disease-related mortality. 2,3 However, inadequate management of BP still remains one of the greatest individual risk factors for all-cause mortality globally, 4 and each 10 mmHg rise in average systolic blood pressure (SBP) has been previously associated with an increase in cardiovascular disease (CVD) and chronic kidney disease risk of up to 16%. 5 Dietary and lifestyle modifications play a major role in managing BP. 6,7 For this reason, the pressure-lowering effect of natural supplements has been widely studied, and beneficial effects with minimal adverse effects have been discovered for many substances. 3

Magnesium is the fourth most common cation in the human body, 8 and a deficient intake of magnesium has been associated with various diseases, including asthma, diabetes mellitus, hypertension, stroke, heart disease, and even cancer. 9,10 Therefore, magnesium has been proposed as a treatment for hypertension. 11 By inducing the formation of nitric oxide and prostacyclin, 12 magnesium helps in modulating vasodilation, decreasing vascular tone and vascular reactivity. 13 Magnesium also possesses anti-inflammatory and antioxidant properties 14 and interacts with calcium, 12 decreasing peripheral vascular resistance 15 and decreasing blood pressure. 16 Observational epidemiological studies have reported a negative association between dietary magnesium supplementation and BP, 17 and various clinical trials have been conducted in the past years to study the effects of magnesium on BP, with inconsistent results published. 16
Even systematic reviews conducted on RCTs provided inconclusive results on the effects of magnesium on SBP and DBP. For instance, one meta-analysis reported a significant reduction in DBP and a nonsignificant reduction in SBP, 18 while another meta-analysis reported that magnesium supplementation resulted in a significant reduction of SBP and DBP, 19 and a third meta-analysis reported only a slight decrease in BP. 20 In patients with type 2 diabetes mellitus, one meta-analysis reported a beneficial effect of magnesium on BP, 21 while a second one showed a favorable effect on SBP but not on DBP. 22 Conflicting results were obtained from various studies, and hence we conducted this umbrella meta-analysis of RCTs to provide clear evidence and a conclusion on the effect of magnesium supplementation on blood pressure. Methods This study was implemented based on the Preferred Reporting Items for Systematic Reviews and Meta-analysis (PRISMA) guidelines. 23 Inclusion and exclusion criteria We included articles in the present umbrella meta-analysis according to PICO criteria: Population/Patients (P: subjects treated with magnesium); Intervention (I: magnesium); Comparison (C: control or placebo group); and Outcome (O: SBP and DBP). Meta-analysis articles examining the effects of magnesium on blood pressure (SBP and DBP) in humans, with reported effect sizes (ESs) and confidence intervals (CIs), were included in the current umbrella meta-analysis. Moreover, observational studies, case reports, controlled clinical trials, prospective studies, studies with a "low quality" score, and articles in languages other than English were excluded. Methodological quality assessment and grading of the evidence Two independent researchers utilized the A MeaSurement Tool to Assess Systematic Reviews (AMSTAR) 2 questionnaire to evaluate the methodological quality of eligible meta-analyses. 24 This tool contains 16 items that require referees to answer "Yes," "Partial Yes," "No," or "No Meta-analysis." The AMSTAR 2 list was categorized into "high quality," "moderate quality," "low quality," and "critically low quality." We appraised the general strength and quality of evidence using GRADE, based on the Cochrane Handbook for systematic reviews of interventions. 25 Study selection and data extraction Two independent investigators reviewed the papers to select those fulfilling the eligibility criteria, and any discrepancy was resolved by the corresponding author. The following items were extracted from the included articles: year of publication, first author's name, study location, sample size, magnesium supplementation dosage, and effect sizes and CIs for SBP and DBP. Data synthesis and statistical analysis We utilized STATA version 17.0 to carry out all statistical analyses (Stata Corporation, College Station, TX, US). To calculate the overall ES and CIs, the random-effects model was used. Heterogeneity among studies was assessed using the I² statistic and Cochrane's Q-test, with P < 0.1 or an I² value > 50% regarded as significant. Subgroup analyses were conducted to detect potential sources of heterogeneity based on the reported medians of predetermined variables, namely duration of intervention and magnesium supplementation dosage. We applied sensitivity analyses to survey the influence of removing any particular effect size on the combined results. Formal Egger's tests and visual checking of funnel plots were also performed to detect publication bias, with a P-value < 0.05 regarded as meaningful.
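As a concrete illustration of the pooling and publication-bias procedures described above, the sketch below implements DerSimonian-Laird random-effects pooling, the I² statistic, and an Egger-style regression test. It is only a minimal sketch: the effect sizes and standard errors are hypothetical placeholders, not the data of the included meta-analyses, and the original analyses were performed in STATA 17.0 rather than Python.

```python
import numpy as np
import statsmodels.api as sm

# Hypothetical per-study effect sizes (mmHg) and standard errors -- placeholders,
# not the actual values extracted from the ten included meta-analyses.
es = np.array([-1.8, -0.9, -2.3, -0.4, -1.1])
se = np.array([0.60, 0.40, 0.90, 0.30, 0.50])

# DerSimonian-Laird random-effects pooling.
w = 1.0 / se**2                                  # fixed-effect weights
fixed = np.sum(w * es) / np.sum(w)
q = np.sum(w * (es - fixed) ** 2)                # Cochran's Q
df = len(es) - 1
c = np.sum(w) - np.sum(w**2) / np.sum(w)
tau2 = max(0.0, (q - df) / c)                    # between-study variance
w_re = 1.0 / (se**2 + tau2)                      # random-effects weights
pooled = np.sum(w_re * es) / np.sum(w_re)
se_pooled = np.sqrt(1.0 / np.sum(w_re))
i2 = max(0.0, (q - df) / q) * 100.0              # I^2 heterogeneity statistic (%)
print(f"pooled ES = {pooled:.2f} mmHg, "
      f"95% CI = ({pooled - 1.96*se_pooled:.2f}, {pooled + 1.96*se_pooled:.2f}), "
      f"I2 = {i2:.0f}%")

# Egger-style regression test for funnel-plot asymmetry: regress the standardized
# effect on precision; an intercept far from zero suggests publication bias.
fit = sm.OLS(es / se, sm.add_constant(1.0 / se)).fit()
print(f"Egger intercept = {fit.params[0]:.2f}, p = {fit.pvalues[0]:.3f}")
```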
Study characteristics Figure 1 shows the flow diagram of the literature search process. In total, 507 papers were recovered in the electronic database searches, out of which 178 were excluded for being duplicates. After screening the titles and abstracts of the remaining 329 publications, 315 articles were removed. Ultimately, 10 meta-analyses published between 2002 and 2021 were included in the umbrella meta-analysis, amounting to a total of 8610 participants. 21,22,26-29,19,30-32 Mean magnesium dose varied from 364 to 440 IU/day, and intervention duration ranged between 8.85 and 14.54 weeks. Detailed characteristics of the included meta-analyses are outlined in Table 1. Most of the included meta-analyses in this umbrella review were graded as having moderate to high quality; the results of the quality assessment of every article on each of the AMSTAR 2 questionnaire items are presented in Table 2. Meta-regression Subsequent analysis of the relationship of intervention duration (weeks) and magnesium supplementation dosage (mg/day) with SBP and DBP alterations revealed a significant correlation (Supplementary Figure 2; Figure 3). Figure 2. Forest plot of the umbrella review on the effects of magnesium intervention on systolic blood pressure. Sensitivity analysis and publication bias After sensitivity analysis, no single arm was found to affect the combined effect size (Supplementary Figure 3). Egger's tests and visual inspection of the funnel plots showed no sign of publication bias (Figure 4). Discussion The present umbrella meta-analysis on the effect of magnesium supplementation on blood pressure summarized the results of 10 meta-analyses. The findings of this assessment support the evidence that magnesium supplementation lowers DBP and SBP in a statistically significant manner, although the effect size is small, hence suggesting the potential use of magnesium as part of dietary interventions for the management of hypertension. Although cost-utility analyses are lacking, magnesium supplementation could potentially reduce the economic costs of hypertension treatment. Sufficient evidence demonstrates the link between hypertension and various chronic diseases, 33 but further investigations are warranted to study the effects of magnesium supplementation on other chronic diseases. Magnesium is one of the most common minerals in the human body, with 99% of it distributed intracellularly. 34 The role of magnesium in reducing hypertension has been attributed to multiple mechanisms of action, including acting as a calcium channel blocker, competing with sodium binding sites on vascular smooth muscle cells, decreasing intracellular sodium and calcium, enhancing prostaglandin E, binding cooperatively with potassium, inducing vasodilation, and improving endothelial dysfunction in diabetic and hypertensive patients. 35 Moreover, magnesium induces nitric oxide release from endothelial cells, which acts as a vasoactive mediator and produces a synergistic effect with antihypertensive medications. 36 The effect of magnesium on osteopontin has also been proposed to be one of the mechanisms involved in inhibiting vascular calcification and reducing BP. 37 According to our umbrella meta-analysis, magnesium supplementation resulted in a statistically significant decrease in SBP (ES = -1.25 mmHg; 95% CI: -1.98, -0.51, P = 0.001) and DBP (ES = -1.40 mmHg; 95% CI: -2.04, -0.75, P = 0.000). Similar results have been obtained by several meta-analyses, including a meta-analysis of 11 RCTs conducted by Asbaghi et al.
with magnesium doses ranging from 36.49 to 500 mg/day and intervention durations of 4 to 24 weeks, which reported a significant reduction of SBP and DBP, 21 as well as one by Dibaba et al., which reported that administration of 365 to 450 mg/day of elemental magnesium resulted in a reduction of SBP by 4.18 mmHg and DBP by 2.27 mmHg, 27 and other meta-analyses. 38 In contrast with our study results, Verma et al. reported that magnesium supplementation provides a moderate beneficial impact on SBP but not on DBP, 22 which could be because their meta-analysis on hypertension included only four studies, with high heterogeneity among those studies. Song et al. reported that magnesium supplementation did not provide beneficial effects on SBP and DBP; 30 however, the study population of that meta-analysis included only patients with type 2 diabetes mellitus, which could be the reason for this conflicting result; moreover, the main focus of the study was the effect of magnesium on glycemic control rather than blood pressure. According to the American Food and Nutrition Board, the recommended dietary magnesium intake for people aged 31-70 years is 420 mg/day for males and 320 mg/day for females. 39 In our subgroup analysis, a significant reduction in SBP and DBP was observed in magnesium supplementation with doses ≥400 mg/day and treatment durations ≥12 weeks. In line with our results, a meta-analysis conducted by Asbaghi et al. reported in their subgroup analysis that magnesium supplementation at a dose of > 300 mg/day or with a duration of > 12 weeks provided significant beneficial effects on both SBP and DBP. 21 In another meta-analysis, magnesium supplementation of > 370 mg/day resulted in SBP reduction by 0.66 mmHg and DBP reduction by 0.57 mmHg. 19 The reduction in BP due to magnesium supplementation could have beneficial effects on cardiovascular outcomes. A clinical trial reported that a 0.8 to 2 mmHg reduction of SBP could help in decreasing the risks of coronary artery disease, stroke, and heart failure, with a 2 to 3 mmHg decrease of BP reducing the risk of stroke by up to 6 to 12%. 40 Hence, the reduction of BP by magnesium supplementation, although not enough to recommend magnesium as an antihypertensive monotherapy, could have clinical significance when used as a dietary supplement in addition to other antihypertensive medications in subjects with hypertension. Clinical Implications Our umbrella meta-analysis of randomized controlled trials revealed that magnesium supplementation significantly reduced SBP and DBP. Hence, magnesium supplementation can be used in conjunction with antihypertensive medications to cause a significant decrease in blood pressure. Strengths and Limitations To the best of our knowledge, this is the first umbrella meta-analysis to assess the effect of magnesium supplementation on BP. We performed subgroup analyses based on the dose and duration of magnesium supplementation. Since our review included only meta-analyses of RCTs, bias was significantly reduced. In addition, Egger's test and visual inspection of the funnel plot revealed no publication bias.
However, our review is not without limitations. Significant heterogeneity was found among the included meta-analyses, and the dose and duration of magnesium interventions in patients with specific comorbidities were not reported. Thus, we recommend that future studies focus on the effects of magnesium supplementation on blood pressure in patients with comorbidities. Moreover, overlap of primary studies is unavoidable in any umbrella review, which is another limitation of this review. Conclusion The findings of the present umbrella meta-analysis showed a small but statistically significant decrease of SBP and DBP with magnesium supplementation, with significant effects at doses ≥400 mg/day and durations ≥12 weeks. Although the reduction of BP by magnesium supplementation is not enough to recommend its use as monotherapy for hypertension, it could have clinical significance when used as a dietary supplement in addition to other antihypertensive medications in patients with hypertension. Further studies are required to determine the effects of magnesium supplementation on BP in patients with comorbidities. Declaration of competing interest No conflict of interest to declare. Figure 1. Flow chart of the study selection process for the umbrella meta-analysis. Figure 3. Forest plot of the umbrella review on the effects of magnesium intervention on diastolic blood pressure. Figure 4. Funnel plot of the WMD versus the SE of the WMD. WMD, weighted mean difference; CI, confidence interval; SE, standard error. Table 1. Study characteristics of included studies. Table 2. Results of the assessment of the methodological quality of the included meta-analyses.
2024-08-02T15:23:15.399Z
2024-07-01T00:00:00.000
{ "year": 2024, "sha1": "4a039e7d7eceb145e3d268a2a62d87f1413e1747", "oa_license": "CCBYNCND", "oa_url": "https://doi.org/10.1016/j.curtheres.2024.100755", "oa_status": "GOLD", "pdf_src": "PubMedCentral", "pdf_hash": "a1d81310a2314ca2eae9da3e69e5ee1bf3b1a3a9", "s2fieldsofstudy": [ "Medicine" ], "extfieldsofstudy": [ "Medicine" ] }
55279580
pes2o/s2orc
v3-fos-license
Study of sensory-motor and somatic development of the offspring of rats ( Wistar ) treated with caffeine The influence of caffeine, administered to rats, on the somatic and sensory-motor development of the offspring was investigated. Female Wistar rats were divided into a control group and a treated group and received drinking water and a 0.1% solution of caffeine orally, respectively. The offspring, also divided into a control group and a treated group, received daily monitoring until the 20th day of life to verify alterations in somatic neural development. The offspring of the treated group had reduced weight on the day of birth and on the 1st, 5th, 15th and 20th days of life; shorter snout-anus length (evaluation done daily); shorter snout-tail length on the day of birth and on the 1st, 5th and 10th days of life, and signs of retardation of somatic and sensory-motor maturation. These results allowed the conclusion that administration of caffeine to rats affects somatic and sensory-motor development of offspring. INTRODUCTION Physiologic adaptation, also called metabolic programming or metabolic impression, can be activated in an organism when exposed to a particular influence in the intrauterine environment or early stages of life, while still passing through critical phases of tissue and organ development (Barker, 2000). Any disturbances to the maternal organism before pregnancy can affect the offspring into adult life (Silveira, 2004).If the mother consumes certain substances during pregnancy which cross the placental barrier, the fetus is exposed to these substances and the many effects they can cause. Caffeine intake in animals is a risk factor due to adverse reproductive effects.Caffeine has a rapid absorption in the gastrointestinal system, and passes into blood and fetal tissues including the central nervous system, when it is administered to the mother (Matijasevich, Santos, Barros, 2005).Highest blood caffeine levels are reached between 3 and 120 minutes, during which time it is rapidly distributed throughout body tissues, achieving equilibrium between blood and tissue levels (Golding, 1995). Although animal studies indicate that caffeine leads to a decrease in fetal intrauterine growth, low birth weight, fetal re-absorption and teratogenesis, these results are still inconclusive in human studies (Souza, Sichieri, 2005).There are reports in the literature of decreased intrauterine growth after mother's consumption of caffeine during pregnancy; however, other articles found no connection between caffeine consumption and low birth weight and prematurity (Bicalho, Barros Filho, 2002), or delayed intrauterine growth (Bicalho, Barros Filho, 2002;Bracken et al., 2003). In rats, longitudinal growth after birth and after weaning, based on tail length with respect to body length, is used to track body growth (Stewart, Preece, Sheppart, 1975;Barbosa, Santiago, 1994).Other studies have evaluated somatic maturation as a form of corporal growth of these rats (Deiró, 1998;Silveira, 2004;Reis, 2005;Oliveira, 2006). 
After birth, the animal's behavior throughout life will be a fundamental part of its adaptation to the environment (Deag, 1981).Responsiveness to environmental factors is related to the level of maturation of the nervous system (Rodriguez-Perez, Vicente-Perez, Garcia, 1992).In this context, this reflex can be characterized: an elementary coordinated motor action that corresponds to a specific sensorial stimulus (Kandel, Schwartz, Jessell, 1992).In recent decades, studies involving growth and development of the nervous system have been featured in the scientific literature.All areas of the nervous system are affected by exogenous factors, for example: dietary alterations and pharmacologic manipulations of neurotransmitters (Deiró, 1998). Caffeine has been shown to be effective in a onetime dose of 25 mg/kg to prevent apnea in infants, but it can also result in considerable reduction in the blood flow velocity in cerebral arteries.When the same dose was administered again after 4 hours, velocity of blood flow in the cerebral arteries was decreased after the 2 nd dose, while the velocity in intestinal arteries and left ventricle ejection were unaffected (Hoecker et al., 2006). Caffeine is found in many popular drinks (for example: coffee, tea, soft drinks) (Graham, 1978;Kaminsky, Fisberg, 1992) which 80% of the population uses daily; coffee being the most abundant source of caffeine and the most consumed (Camargo, Toledo, 1998).Chocolate and some medicines are other important sources; it has been calculated that approximately 200 non-prescription drugs contain caffeine (Santos et al., 1998). As it is a non-adaptive drug, when consumed regularly the stimulant effect does not decrease.However, individual sensitivity exists in that small doses can cause tremors, throbbing and anxiety for many hours in some people, while others consuming caffeine daily show no toxic effects for years or decades.Clinical tolerance and habit depend on chronic use of the drug.Individuals drinking 5 or more cups of coffee per day, followed by a period of 12 to 16 hours' abstinence, can present withdrawal symptoms such as headaches, lethargy, irritability and inability to work (Kaminsky, Fisberg, 1992). Other psychoactive substances causing dependency also present abstinence syndromes: cocaine withdrawal induces somnolence, depression, tiredness, bradycardia; abstinence from marijuana alters sleep and provokes agitation and irritability; nicotine abstinence is associated with appetite increase and weight gain (O 'Brien, 2005). The objective of this study was to investigate the influence of caffeine, administered to rats, on the somatic sensory-motor development during the first 20 days of life. MATERIAL AND METHODS This experiment was approved by the Ethics Committee Responsible for Experiments in Animals at the Center of Biologic Science of the Federal University of Pernambuco (document n° 65/06, process n° 013280/2006-31). In the experiments, twelve rats older than 21 days, weighing between 30-40g were divided into a control group (cg) and treated group (tg) which received a solution of 0.1% of caffeine, a dose method quoted in the literature by Ohnishi et al. (1986), managed orally by water fountains.The administration of caffeine started on the 21 st day of life, when weaning occurs, and continued until weaning of the offspring; the solution of caffeine and the commercial food were offered ad libitum. 
After 90 days of life, the rats were mated, and during this period vaginal smears were analyzed with an optical microscope (Carl Zeiss, Germany) to observe the presence of sperm and to detect the beginning of pregnancy. Offspring of the control group (ocg) and offspring of the treated group (otg) were weighed on a digital scale (Andhr-200, A&D Company) with a sensitivity of 0.001 g, and snout-anus (sa) and snout-tail (st) lengths were measured with a slide caliper (accuracy 0.05 mm) until the 20th day, to test whether somatic neural growth was compromised by the treatment. Comparisons between the treated and control groups were performed for each individual period using Student's t-test for independent samples, with *p < 0.05. The results were expressed as mean ± standard deviation in each treatment group (calculations performed with Microsoft Excel®). RESULTS The rats whose mothers were treated with caffeine had lower weight on the day of birth and on the 1st, 5th, 15th and 20th days of life, with differences of 10.75%, 18.95%, 19.49%, 15.74% and 35.42%, respectively, compared to the offspring of the control mothers (Figure 1). During the treatment, there was no significant difference in the treated mothers' body weight compared to the control group. The average quantities of diluted caffeine solution and of water consumed were 53.28 mL and 57.85 mL, respectively. Offspring of rats treated with caffeine (otg) had snout-anus (sa) lengths shorter by 8.62%, 6.39%, 13.94%, 11.54%, 10.37% and 34.67% on the day of birth and on the 1st, 5th, 15th and 20th days of life, respectively, compared to the offspring of the control group (ocg). Snout-tail (st) length in the treated group (otg) showed lower values by 8.43%, 7.59%, 13.74% and 12.54% on the day of birth and on the 1st, 5th, 15th and 20th days of life, respectively, compared to the ocg group (Figures 2 and 3). The offspring of rats that consumed caffeine presented retardation in somatic development. The percentage differences found for the indicators eu, aco, ei and eo, after comparison of maturation signs between otg and ocg, were 28%, 22.3%, 11.1% and 4.1%, respectively (Table I). The offspring of rats administered caffeine presented retardation in the day of appearance of the sensory-motor maturation signs investigated. According to the day of appearance of the r, ca, as and fr reflexes, the percentage differences found when comparing otg and ocg were 204.9%, 93.35%, 16.22% and 14.68%, respectively (Table II). DISCUSSION Rats submitted to treatment with caffeine during pregnancy and lactation yielded a higher volume of milk, although the daily total production of proteins, lactose and triglycerides was similar to the values observed in females without treatment (Hart, Grimble, 1990). This could affect the body weight of both offspring groups of this study. However, although caffeine caused weight modifications, only more detailed studies can conclude whether the alterations were of metabolic origin (Barker, 2000; Silveira, 2004) or due to congenital metabolic alterations exclusively in the offspring. The findings of weight differences between the groups corroborate other studies available in the literature (Soyka, 1979; Aeschbacher et al., 1980; Evereklioglu et al., 2003).
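A minimal sketch of the independent-samples Student's t-test described in the statistics paragraph above is given below. The per-animal weight values are hypothetical placeholders (the paper reports only group means, standard deviations and percentage differences), and the original calculations were performed in Microsoft Excel rather than Python.

```python
import numpy as np
from scipy import stats

# Hypothetical body weights (g) of offspring on a single postnatal day;
# placeholders only -- the individual animal data are not reported in the paper.
control = np.array([6.1, 6.4, 6.0, 6.3, 6.2, 6.5])
treated = np.array([5.3, 5.5, 5.1, 5.4, 5.2, 5.6])

# Independent-samples Student's t-test, one comparison per observation day.
t_stat, p_value = stats.ttest_ind(control, treated)
print(f"mean ocg = {control.mean():.2f} +/- {control.std(ddof=1):.2f}, "
      f"mean otg = {treated.mean():.2f} +/- {treated.std(ddof=1):.2f}")
print(f"t = {t_stat:.2f}, p = {p_value:.4f}  (* if p < 0.05)")
```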
During the period of fetal and neonatal development, body tissues are growing (histogenesis) and so the animals grow (Bernardi, 1996). However, appropriate conditions are necessary for a species to reach the correct body growth for its age, and exposure to some substances can interfere significantly with the growth and maturation process. Effects of caffeine on body growth of rats are debatable. Caffeine is thought to diminish bone mineral density, which may occur through a change in osteoblast viability, leading to an increase in apoptosis in these cells (Tsuang et al., 2006). Caffeine, associated with a protein diet in rats, is capable of altering the bone mineral content of the mandible and femur of the offspring. However, treatment with zinc supplements promotes greater than normal bone development in the offspring (Sasahara, Yamano, Nakamoto, 1990). Effects on growth depend on the concentration of the dose selected for treatment. Daily doses of caffeine, starting on the rats' 5th day of life, were associated with an increase in the blood concentration of growth hormone (GH); however, this applies only to the acute effect of the dose. The chronic effect reflected only a slight increase in this concentration, which led some authors to conclude that caffeine was responsible for the depletion of GH in the pituitary gland (Clozel et al., 1983). The way growth hormone (GH) is secreted is an important factor in its metabolic and somatic performance in rodents (Jaffe et al., 2002). It is possible that the interferences caused by caffeine in the somatic development of the offspring could be due to toxicological alterations: a substance which can cause retardation in growth or development of specific organs and/or systems can create anatomic and biochemical alterations that generally occur in organisms in the late period of development (Bernardi, 1996). Another way of following the development of rats is through the analysis of reflex actions. Neurologic tests support the confirmation of retardation or precocity of these actions (Silveira, 2004) and provide information about the capacity of an organism to respond to an environmental stimulus without previous experience. Neurologic analysis can thus measure the effects of environmental factors on the maturation of the nervous system and also detect changes in cerebral functions through the presence or absence of responses (Rodriguez-Perez, Vicente-Perez, Garcia, 1992). Caffeine is one of the substances capable of stimulating the central nervous system (Clozel et al., 1983; Martin et al., 2005). However, it is also a substance that interferes with the formation of the nervous system by acting as an antagonist of receptors for adenosine, the neuromodulator that controls the release of acetylcholine, an important neurotransmitter for cerebral development. Caffeine is also described as capable of increasing the activity of acetylcholinesterase, the enzyme that inactivates the action of acetylcholine (Silva et al., 2008). Another reported injury to the central nervous system due to caffeine is a delay in the time of appearance of the decubitus reflex in the offspring of rats that consumed caffeine under the same conditions used in this study (Peruzzi, 1985). Associating the results on somatic neural development with this information, the values found in the offspring show that, when administered to females, caffeine is responsible for delayed development of the nervous system of the offspring.
The critical period of cerebral development of the rat, in terms of maturation and sexual differentiation of the brain, occurs between the 6th and 13th day after mating, and from around the 13th-14th day of pregnancy until the 21st postnatal day of the offspring (weaning day) (Bernardi, 1996). However, according to some studies, manipulations using a psychoactive substance after the critical period of development are also capable of altering cerebral development. A chronic dosage of cocaine, when administered subsequent to weaning of the male offspring, exacerbates sexual behavior and diminishes the development of hippocampal cells (Andersen et al., 2007). According to the data described, an explanation for the fact that the differences between the groups analyzed were more discrepant in the first two cerebral maturation stages is that the animals were only midway through their development, when the retardation could be the acute cerebral effect of exposure to the external agent. The smaller differences in the day of appearance of the last two signs of reflex maturation could be related to the fact that these manifest themselves in a more advanced period of cerebral maturation, suggesting that, even when exposed to this substance, the cerebral modifications appeared to a lesser degree, probably because the nervous system had matured sufficiently to counteract the effects caused by caffeine. FIGURE 1 - Body weight (grams) as a function of age (days) of the offspring of control rats (ocg) and the offspring of treated rats (otg). Values are expressed as mean ± standard deviation, n=15 and *p<0.05. FIGURE 2 - Snout-anus length (centimeters) as a function of age (days) of offspring of control rats (ocg) and of offspring of treated rats (otg). Values were expressed as mean ± standard deviation, n=20 and *p<0.05. FIGURE 3 - Snout-tail length (centimeters) as a function of age (days) of offspring of control rats (ocg) and offspring of treated rats (otg). The values were expressed as mean ± standard deviation, n=20 and *p<0.05. TABLE I - Somatic development of the offspring of control rats (ocg) and the offspring of rats treated with caffeine (otg). TABLE II - Sensory-motor development of the offspring of control rats (ocg) and of the offspring of rats treated with caffeine (otg). Values expressed as mean ± standard deviation of age (days), n=20, *p<0.05.
2018-12-08T18:53:27.616Z
2009-12-01T00:00:00.000
{ "year": 2009, "sha1": "bee9a342dffcbaab9db0af7e8387d7ab646972ad", "oa_license": "CCBY", "oa_url": "https://www.scielo.br/j/bjps/a/hjFZFCGQDsxckCH4KMBhtTS/?format=pdf&lang=en", "oa_status": "GOLD", "pdf_src": "ScienceParseMerged", "pdf_hash": "bee9a342dffcbaab9db0af7e8387d7ab646972ad", "s2fieldsofstudy": [ "Biology", "Psychology", "Medicine" ], "extfieldsofstudy": [ "Biology" ] }
118577454
pes2o/s2orc
v3-fos-license
Epistemic Restrictions in Hilbert Space Quantum Mechanics A resolution of the quantum measurement problem(s) using the consistent histories interpretation yields in a rather natural way a restriction on what an observer can know about a quantum system, one that is also consistent with some results in quantum information theory. This analysis provides a quantum mechanical understanding of some recent work that shows that certain kinds of quantum behavior are exhibited by a fully classical model if by hypothesis an observer's knowledge of its state is appropriately limited. Introduction The problem of understanding the quantum world continues to give rise to numerous debates. While the tools of textbook quantum theory allow us to calculate probabilities of measurement outcomes in agreement with experiment, the problem of understanding these in terms of microscopic quantum phenomena continues to perplex beginning students as well as their teachers. There are (at least) two distinct strategies for exploring these questions. One starts with classical physics, which is reasonably well understood, both its mathematical structure and its physical or intuitive interpretation, and tries to see how far classical ideas can be pushed into the quantum domain before they fail. This helps locate the classical-quantum boundary, and identify which classical concepts remain useful once it has been crossed, and which must be abandoned or radically modified. A second strategy starts from a consistent formulation of microscopic quantum theory, and seeks to apply it to larger systems to see how classical physics emerges as a suitable, and sometimes extremely good, approximation to quantum theory at the macroscopic level. Much current research on hidden variable models represents the first strategy. In a version pioneered by John Bell, one starts with hypotheses which seem plausible in classical physics and uses them to deduce consequences, typically inequalities, whose violation by quantum theory and experiment shows that one or more of the assumptions made in the derivation do not apply to the real quantum world. While some of the resulting claims, such as that the quantum world is nonlocal or contextual, do not stand up under scrutiny [1,2,3], this research should nonetheless help us better understand quantum mysteries provided the classical ideas and assumptions underlying the hidden variables approach are clearly and properly identified. Classical ideas are made quite explicit in Spekkens "toy theory" approach [4], where by hypothesis an observer can have only a limited knowledge of the actual (ontic) state represented by some collection of classical variables. This idea has recently been extended in a very careful study [5] of coupled classical harmonic oscillators, by assuming that an observer's knowledge, in the form of a probability distribution, is limited by an epistemic restriction that resembles a quantum uncertainty principle. This restriction allows the authors to reproduce in an explicitly classical model a number of "weird" effects previously thought to lie wholly in the domain of quantum physics. 
To be sure, this approach does not reproduce the entire gamut of quantum phenomena, but the results encourage the authors to believe, as stated in their introduction, that there might be an axiomatization of quantum theory in which the first axiom states a fundamental restriction on how much observers can know about a system, and the second embodies some novel principle about quantum reality (rather than knowledge thereof). They then add, "Ultimately, the first axiom ought to be derivable from the second because what one physical system can know about another ought to be a consequence of the nature of the dynamical laws. " We shall show that this is not a vain hope. The "novel principle" has already appeared in the physics literature as part of an approach embodying the second strategy mentioned above, the effort to understand how classical physics is an approximation to a more exact underlying quantum theory when the latter is properly understood and interpreted. What is known as the "consistent" or "decoherent" histories-hereafter referred to simply as "histories"-program, introduced in [6,7,8], provides, on the one hand, a fully consistent and paradox free (so far as is known at present) approach to microscopic quantum phenomena, and on the other a means for showing that the laws of classical mechanics are in appropriate circumstances a good approximation to the underlying and more exact quantum physics. In particular the single framework rule of Hilbert space quantum mechanics is a novel principle (relative to classical physics) that leads in a rather natural way to an epistemic restriction of a quite fundamental sort: what an observer can know is limited by the nature of quantum reality, since that which does not exist also cannot be known. The remainder of this paper is organized as follows. Since no literature pertaining to the histories approach is as much as mentioned in [4,5], Sec. 2 contains a brief summary of the relevant principles, including the single framework rule. For additional details we refer the reader to other summaries as well as more extensive treatments of the basic ideas; the following are listed in order of increasing length: [9,10,11,12,13]. In Sec. 3 we show how these principles resolve the measurement problem(s) of quantum foundations, leading in a rather natural way to restrictions on what can be learned using measurements. Section 4 argues that the results of Sec. 3 are consistent with quantum information theory. The results are summarized in the concluding Sec. 5. Quantum properties A key idea that goes back to von Neumann, Ch. III of [14], is that a physical property-something which can be true or false, such as "the energy is between 2 and 3 J"-is represented in quantum mechanics by a (closed) subspace of the quantum Hilbert space or, equivalently, by the projector (orthogonal projection operator) onto this subspace. (Here and later we assume a finite-dimensional Hilbert space; infinite dimensions complicates the mathematics without resolving any of the quantum conceptual difficulties.) A physical variable or observable, such as energy or angular momentum, is represented by a Hermitian operator which can be written in the form A = a j P j , P j = P † j = P 2 j , Here the {P j } are a collection of projectors that form a (projective) decomposition of the identity operator I, and the {a j }, each of which occurs but once in the sum, so j = k means a j = a k , are the eigenvalues of A. 
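The decomposition described in this paragraph is easier to read when typeset. A minimal LaTeX rendering of equation (1), together with a spin-half example, is sketched below; the condition on the eigenvalues simply restates "each of which occurs but once in the sum."

```latex
A \;=\; \sum_j a_j\, P_j , \qquad
P_j \;=\; P_j^{\dagger} \;=\; P_j^{2}, \qquad
\sum_j P_j \;=\; I , \qquad
j \neq k \;\Rightarrow\; a_j \neq a_k .
% Spin-half example (in units of hbar):
S_z \;=\; \tfrac{1}{2}\,|z^+\rangle\langle z^+| \;-\; \tfrac{1}{2}\,|z^-\rangle\langle z^-| .
```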
The property that the physical variable A takes on or possesses the value a j , thus A = a j , corresponds to the projector P j or, equivalently, the subspace P j onto which P j projects. In classical mechanics a property corresponds to a set of points P in a classical phase space Γ, as in Fig. 1(a), and the classical counterpart of a projector is an indicator function P (γ) on the phase space, which takes the value 1 if γ lies inside P, and 0 if γ lies in the complement P c of P, the points in Γ that are not in P. Obviously the two descriptions, using the set P or the indicator P , are equivalent, and set theoretic operations on sets correspond to arithmetic operations on indicators. Thus the indicator of the intersection P ∩ Q, the property "P AND Q" or P ∧ Q, is the product P Q, and the indicator for P c , the property "NOT P " or ¬P , is given by I − P , where I is the function on Γ everywhere equal to 1. In quantum mechanics, again following von Neumann, the negation ¬P of a property P is given not by the set theoretic complement P c of the subspace P, but instead by its orthogonal complement P ⊥ , the collection of all kets (vectors) which are orthogonal to those in P. Indeed, P c is not a subspace, whereas P ⊥ is a subspace with projector I − P . The situation is shown schematically for a two-dimensional Hilbert space in Fig. 1(b), where the subspace or ray consisting of all multiples of some nonzero |ψ is labeled P and its orthogonal complement is the ray P ⊥ . In the classical case any point that is outside P is inside P c , corresponding to the two possibilities that this property is either true or false. By contrast, in the quantum case there are rays, such as Q in Fig. 1(b), which are different from both P and P ⊥ . Thus once one accepts von Neumann's prescription for quantum properties and their negations, a prescription which lies behind but is seldom clearly explained in textbook discussions, it is clear that the move from classical to quantum mechanics must include some changes in ideas about logical reasoning and truth. This becomes even clearer when considering the conjunction, "P AND Q" or P ∧ Q, of two distinct physical properties. In the classical case this corresponds to the intersection P ∩ Q of the two sets, or the product P Q of their indicators. For the quantum case Birkhoff and von Neumann [15] proposed using the intersection of two subspaces, which is itself a subspace, to represent the conjunction of quantum properties, with the disjunction, "P or Q or both", P ∨ Q, defined as the span of the set-theoretic union of P and Q, consistent with the usual rule that ¬(P ∨ Q) = (¬P ) ∧ (¬Q). While this seems plausible, the resulting logical structure promptly leads to paradoxes-see Sec. 4.6 of [13] for a simple example-if one employs the usual rules for reasoning about properties. Birkhoff and von Neumann were aware of this, and their remedy was to abandon the distributive law P ∧ (Q ∨ R) = (P ∨ Q) ∧ (P ∨ R) as a rule of reasoning in the scheme of quantum logic they proposed. Despite a great deal of effort, quantum logic has not turned out to be a useful tool for understanding quantum mechanics and resolving its conceptual difficulties [16,17]. 1 The problematic nature of quantum conjunctions is also evident when one uses projectors. The product P Q of two projectors, corresponding to P ∧ Q, is itself a projector if and only if P Q = QP , in which case it projects onto the intersection P ∩ Q of the corresponding subspaces. 
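A small numerical check can make the commuting/non-commuting distinction concrete. The sketch below, in Python with NumPy (not part of the original paper), builds the spin-half projectors discussed in the following paragraphs and verifies that [z+] and [x+] fail to commute, while [z+] and [z-] are orthogonal.

```python
import numpy as np

# Spin-half basis kets written in the S_z basis.
z_plus  = np.array([1.0, 0.0], dtype=complex)
z_minus = np.array([0.0, 1.0], dtype=complex)
x_plus  = (z_plus + z_minus) / np.sqrt(2)   # |x+> = (|z+> + |z->)/sqrt(2)

proj = lambda v: np.outer(v, v.conj())      # [psi] = |psi><psi| for a normalized ket
Pz_plus, Pz_minus, Px_plus = proj(z_plus), proj(z_minus), proj(x_plus)

print(np.allclose(Pz_plus @ Pz_minus, 0))                 # True: orthogonal projectors
print(np.allclose(Pz_plus @ Px_plus, Px_plus @ Pz_plus))  # False: [z+][x+] != [x+][z+]
print(np.allclose(Px_plus @ Px_plus, Px_plus))            # True: Px_plus is a projector
```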
But if the two projectors do not commute, neither P Q nor QP is a projector, and there is no simple relationship between either of them and the projector onto P ∩ Q. In the histories approach, unlike quantum logic, this is dealt with by introducing a syntactical rule, an instance of the single framework rule, that says that the conjunction P ∧Q is only defined if the projectors commute; otherwise, when P Q = QP , the conjunction of these properties is undefined or meaningless (in the sense that this interpretation of quantum mechanics assigns it no meaning). Note the distinction between a false statement and a meaningless statement, such as P ∧ ∨ Q in ordinary logic. The negation of a false but meaningful statement is a true statement, whereas the negation of a meaningless statement is equally meaningless. To better understand what is and is not implied by the single framework rule and its relation to textbook quantum mechanics, consider the Hilbert space of a spin-half particle, with the orthonormal basis {|z + , |z − } corresponding to S z = +1/2 and −1/2 in units of , and the projectors-we use [ψ] for |ψ ψ| if |ψ is normalized-[z + ] and [z − ]. These projectors commute; in fact [z + ][z − ] = [z − ][z + ] = 0, the zero operator, which in quantum mechanics represents the proposition that is always false. On the other hand, S x = +1/2 and −1/2 correspond to the projectors [x + ] and [x − ], projecting on the rays containing |x ± = (|z + ± |z − / √ 2, neither of which commutes with either [z + ] or [z − ]. The single framework rule says that it is meaningless to simultaneously assign values to S x and S z , e.g., "S z = +1/2 AND S x = −1/2" makes no sense. Note that the single framework rule does not at all forbid a quantum discussion or description using either S x or S z , both of which could be individually meaningful or useful; what it says is that the combination lacks physical meaning. One way to see that the combination "S z = +1/2 AND S x = −1/2" cannot be defined is to note that every ray in a (complex) two-dimensional Hilbert space can be interpreted to mean that the some component of spin angular momentum, corresponding to some direction in space, has the value +1/2. There are no rays left over, thus no room in the Hilbert space, for a property representing such a (supposed) conjunction. 2 That it does not make sense is also implicit in the assertion found in textbooks that there is no way to simultaneously measure S x and S z . This is correct, and we shall say more about measurements in Sec. 3. However, students would be less confused were they given the fundamental reason behind this: it is impossible to measure what is not there. In histories quantum mechanics the single framework rule is a basic tool for resolving all manner of quantum paradoxes, or at least taming them in the sense of changing them from unresolved conceptual difficulties into interesting examples of how the quantum world differs from that of everyday experience. The reader will find numerous examples in Chs. 19 to 25 of [13]. Probabilities The standard (Kolmogorov) probability theory used in the histories approach requires three things: a sample space S of mutually exclusive possibilities, an event algebra E, and a probability measure M that assigns probabilities to the elements of E. Classical statistical mechanics employs the phase space Γ as the sample space. 
However, a more useful analogy for discussing the quantum case is that of a coarse graining of Γ formed by dividing it up into a finite number of nonoverlapping cells which together cover the entire space. With cell j is associated an indicator function P j (γ) equal to 1 if the point γ lies in cell j, and 0 otherwise. Since the cells do not overlap, the product P j P k of two indicators is 0 if j = k. This corresponds to the fact that the different cells which form the sample space represent mutually exclusive possibilities. In addition, j P j = I corresponds to the fact that at any given time the phase point γ representing the system must be in one of the cells; thus one and only one of these mutually exclusive possibilities is true. The simplest choice for the event algebra E is the collection of all subsets of Γ formed by unions of some of the cells which make up the sample space, with the indicator function of the union equal to the sum of the indicators of the cells of which it is composed. Including the empty set, whose indicator 0 is the function everywhere equal to zero, results in a Boolean algebra in that the negation I − P of any P ∈ E and the conjunction P Q of any two of its elements are also members of E. By analogy, in the quantum case a sample space S is obtained by choosing a collection of projectors {P j } which sum to the identity I-which implies that the projectors are orthogonal to each other, P j P k = 0 = P k P j for j = k-as a quantum sample space S. The set of all projectors formed by taking sums of some of the projectors in {P j }, plus the 0 operator, is the corresponding quantum event algebra E. The event algebra is called a framework, a term also used for the projective decomposition that generates it. (As there is a oneto-one correspondence between S and E, this double usage should not cause confusion.) The same physical interpretation can be used as in the classical case: the {P j } constitute a collection of mutually exclusive properties, one and only one of which is true at a particular time. Thus in the sample space employed in (1), the observable A will possess one and only one of its eigenvalues. The event algebra allows more general things; e.g., "A has either the value a 2 or a 3 " is represented by P 2 + P 3 . An important difference between the classical and the quantum case is that in the former if one uses two different coarse grainings, two different collections of cells, each of which covers the entire phase space, there is always a common refinement, a coarse graining using cells made up of intersections of cells from the two collections. Its event algebra includes among its members all the members of the event algebras of the two coarse grainings from which it is derived. Exactly the same is possible in the quantum case if and only if each of the projectors in one decomposition commutes with every projector in the other; that is, if the two decompositions are compatible. Otherwise there is no common refinement, and the single framework rule prevents putting the frameworks together in a common probabilistic model. For example, with specific reference to the observable A in (1), let Q be a projector that does not commute with one of the P j . Then the framework {Q, I − Q} is incompatible with {P j }, and according to the single framework rule the question "What is the value of A given that the quantum system has the property Q?" has no meaning. 
(The situation is different if Q is understood as a pre-probability; e.g., the role played by [Ψ 2 ] in (12) Since the single framework rule lacks any exact classical analog, it is easily misunderstood. The following principles may help prevent such misunderstanding. First, the single framework rule allows the physicist perfect Liberty to construct different, perhaps incompatible, frameworks when analyzing and describing a quantum system. No law of nature singles out a particular quantum framework as the "correct" description of a quantum system; there is, from a fundamental point of view, perfect Equality among different possibilities. The key principle of Incompatibility prohibits combining incompatible frameworks into a single description, or employing them for a single logical argument leading from premisses to conclusions. Finally comes Utility: not every framework is useful for understanding a particular physical situation. It is also important to avoid the mistake of thinking that the physicist's choice of framework somehow influences reality. Instead, quantum reality allows a variety of alternative descriptions, useful for different purposes, which when they are incompatible cannot be combined. Different coarse grainings of a classical phase space, or different views of a mountain from the north and from the south, are classical analogies which may help in understanding Liberty, Equality, and Utility. But Incompatibility requires a quantum example, as provided by the S x and S z descriptions of a spin-half particle discussed in Sec. 2.1 above. The probability measure M in standard probability theory is a nonnegative function µ on the event algebra E. It is additive over disjoint sets, and normalized, µ(I) = 1. For our purposes it suffices to assume that a nonnegative number µ j is attached to each indicator P j in the sample space in such a way that j µ j = 1. From this the probability of elements of the event algebra is determined in the usual way, e.g. Pr(P 2 + P 3 ) = µ 2 + µ 3 . The same procedure works in the quantum case: to each projector P j in the decomposition of the identity (quantum sample space) under consideration one assigns a probability µ j ≥ 0 satisfying j µ j = 1 and then sums of these to the projectors making up the corresponding event algebra. It is important to note that aside from positivity, additivity, and normalization, mathematical probability theory imposes no restrictions on the µ j . The same is true for quantum theory, except that under certain conditions one can use the Born rule and its extensions in order to generate probabilities for a closed system, as discussed below. Probability theory can be understood as an extension of propositional logic, where probability 1 corresponds to a proposition that is true, and probability 0 to one that is false. In order to maintain the same connection in quantum theory, it follows that "true" and "false" must, like probabilities, be frameworkdependent concepts. This dependence has sometimes been thought to imply that the histories approach leads to contradictions, propositions which are both true and false [18,19,20]. However, the single framework rule prevents contradictions from arising [21,22,23,24,25,26], and one can show, see Ch. 16 of [13], that the histories approach provides a consistent scheme for probabilistic inference. 
Time development In textbook quantum mechanics Schrödinger's equation provides a deterministic unitary time development of "the wave function" until an external measurement causes a mysterious wave function collapse. This approach, found in [14], is widely (and properly) regarded as unsatisfactory. In the histories approach quantum dynamics is always a stochastic process, whether or not a measurement occurs, and solutions to Schrödinger's equation are used to compute probabilities by means of the Born rule and its extensions. Here we summarize the essentials needed for the discussion of measurements in Sec. 3. Quantum stochastic time development can be described using a history Hilbert spaceH, which for a sequence of events at times t 0 < t 1 < · · · t f is a tensor product of f + 1 copies of the Hilbert space H used for the system at a single time, where the customary tensor product symbol ⊗ has been replaced by ⊙ as a matter of convenience, to have a distinctive symbol separating events at different times. An individual quantum history of the simplest sort is a tensor product of projectors and thus itself a projector on the history Hilbert space. Its physical interpretation is "property F 0 at time t 0 , then property F 1 at time t 1 , then . . . ", where "then" could be replaced by "and." In general, successive events are not connected with each other in any way related to Schrödinger's equation. Rather than the most general case we restrict ourselves to the situation in which at time t 0 the projector F 0 = [Ψ 0 ] projects onto a specific initial state |Ψ 0 , and at each later time t m , F m belongs to the event algebra generated by a specific decomposition {P αm m } of the identity, αm P αm m = I. Here the α m are labels, not exponents, the subscript m indicates the time, and different decompositions may be used at different times. The sample space of histories corresponds to a collection {Y α }, where α = (α 1 , . . . α f ) is a vector of labels, and If in addition one includes the special history (I − [Ψ 0 ]) ⊙ I ⊙ I ⊙ · · · I, which is assigned a probability of 0 and hence plays no role in the following discussion, the history projectors in the sample space sum to the history space identityȊ = I ⊙ I ⊙ · · · I, and thus constitute a set of mutually exclusive possibilities, one and only one of which can be said to occur. The collection of all projectors which are sums of some of the Y α forms the event algebra. For a closed system that does not interact with an external environment, solving Schrödinger's equation yields a unitary time development operator T (t, t ′ ) for the time interval from t ′ to t; it is equal to exp[−i(t − t ′ )H/ ] in the case of a time-independent Hamiltonian H. Using this time development operator we define a chain ket for every history Y α in the sample space. A family of histories satisfies the consistency condition, and is called a consistent family, provided the inner product of two chain kets for distinct elements of the history sample space vanishes, where δ(α, α ′ ) is 1 if α m = α ′ m for every m, and is 0 otherwise. When (6) is satisfied the µ α are the (extended) Born probabilities for histories in the sample space, and determine the probabilities for histories in the event algebra in the usual way. 
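The numbered equations (2) through (6) referred to in this passage did not survive extraction. The block below is a reconstruction, in standard consistent-histories notation, of what the surrounding prose describes: the history projector, the chain ket built from the time development operator, and the consistency condition with its associated probabilities. The exact typography and numbering of the original equations are assumed.

```latex
% History projector for the sample space described in the text:
Y^{\alpha} \;=\; [\Psi_0] \odot P_1^{\alpha_1} \odot P_2^{\alpha_2} \odot \cdots \odot P_f^{\alpha_f} .
% Chain ket constructed from the unitary time development operator T(t,t'):
|\alpha\rangle \;=\; P_f^{\alpha_f}\, T(t_f,t_{f-1})\, \cdots\, P_2^{\alpha_2}\, T(t_2,t_1)\,
                     P_1^{\alpha_1}\, T(t_1,t_0)\, |\Psi_0\rangle .
% Consistency condition and (extended) Born probabilities:
\langle \alpha' | \alpha \rangle \;=\; 0 \quad \text{for } \alpha \neq \alpha' ,
\qquad
\mu_{\alpha} \;=\; \langle \alpha | \alpha \rangle .
```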
Condition (6) is needed to ensure that the probabilities defined in this way satisfy the usual rules of probability theory; e.g., if one sums the joint distribution over all possibilities at a particular time t j the result should be the joint distribution for the events at the remaining times as calculated omitting all mention of t j from the family of histories. For a more detailed discussion of this point see Sec. 10.2 of [13]. In the case f = 1, histories involving only two times t 0 and t 1 , the consistency condition is automatically satisfied, since the projectors at time t 1 are orthogonal to each other, and the probabilities are exactly those given by the usual Born rule. Note, however, that these probabilities refer to states of affairs inside a closed quantum system, not to outcomes of measurements carried out on that system by some external apparatus. The overall consistency of this approach is shown in Sec. 3 below, where measurements themselves are treated as quantum mechanical processes occurring within a (large) closed system. When f = 2 or more, a family of histories involving three or more times, the consistency condition (6) on the orthogonality of chain kets for α = α ′ is quite restrictive. Families of histories for which it is not satisfied cannot be assigned probabilities in a consistent manner. It may be that even when the history projectors of two different consistent families commute with each other, so that there is a common refinement, this refinement does not satisfy the consistency conditions, and so cannot be assigned probabilities. It is then natural to extend the single framework rule to include a prohibition of such combinations. 3 There are various extensions of the type of analysis given above to more general situations. In place of an initial pure state [Ψ 0 ] at t 0 one can use a more general projector or a density operator; in that case chain operators are employed in place of chain kets. Sometimes the weaker requirement that the real part of α|α ′ vanish for α = α ′ , allowing the imaginary part to be nonzero, is used in place of (6), though there are reasons [27] for preferring the stronger condition. 4 Rather than assuming an initial state at t 0 one can use a final state at t f , or indeed some property at an intermediate time, as the "initial" condition. In constructing a history family the choice of projectors at a particular time can be made dependent on which event occurred at an earlier (or at a later) time. Numerous examples illustrating some of these points will be found in [13]. (Using the Heisenberg representation for projectors that enter chain kets or chain operators results in more compact expressions that lead to the same consistency conditions and probabilities as in the more intuitive Schrödinger picture employed here.) Two measurement problems It is clear that any claim to know something about a microscopic quantum system must go beyond elementary human sense impressions and make use of data provided by suitable instruments that amplify quantum effects and, so to speak, make them visible; in particular we need to understand physical measurements as genuine quantum processes. But this is the infamous measurement problem of quantum foundations, which has two parts. The first measurement problem is that of understanding the macroscopic outcome-we adopt the picturesque though outdated language of the position of a visible pointer-in proper quantum mechanical terms. 
The second measurement problem is to relate the pointer position to the prior microscopic property the instrument was designed to measure. Here 'prior' means earlier in time, since very often the measurement either destroys or radically alters the system being measured: think of the detection of a gamma ray, or the scattering process by which it is inferred that a neutrino came from the sun or from a supernova. We need a quantum theory of retrodiction, inferring something about the past from present data. (Not to be confused with retrocausation, the notion that the future can influence the past.) Obviously, analyzing measurements as physical processes cannot employ measurement as some sort of primitive concept, as in textbooks. Hence the need for a fully consistent description of microscopic quantum properties, one constructed without using measurement as a primitive concept or axiom, as summarized in Sec. 2 above. We shall now show how these principles can be used to resolve both measurement problems. The result will then be used in Sec. 4 to argue that Hilbert-space quantum mechanics itself gives rise, in a rather natural way, to an epistemic restriction which does not need to be added as an extra axiom. Quasiclassical frameworks Describing ordinary macroscopic objects in a consistent, fully quantum-mechanical fashion is a nontrivial problem, and it would be premature to claim that every detail has been worked out. Nevertheless, the work of Omnès [28,29], and Gell-Mann and Hartle [30,31,32] provides a general procedure which seems adequate to the task. We will briefly describe the strategy used by Gell-Mann and Hartle (also see Ch. 26 of [13]). The first idea is that classical properties can be usefully described using a quasiclassical quantum framework employing coarse-grained projectors that project onto Hilbert subspaces of enormous, albeit finite, dimension, suitably chosen so as to be counterparts of classical properties such as those used in macroscopic hydrodynamics. Next one argues that the stochastic quantum dynamics associated with a family of histories constructed using these coarse-grained projectors gives rise, in suitable circumstances, to individual histories which occur with high probability and are quantum counterparts of the trajectories in phase space predicted by classical Hamiltonian mechanics. There are exceptions; for example, in a system whose classical dynamics is chaotic with sensitive dependence upon initial conditions one does not expect the quantum histories to be close to deterministic, but then in practice one also has to replace the deterministic classical description with something probabilistic in order to obtain useful results. A quasiclassical family can hardly be unique given the enormous size of the corresponding Hilbert subspaces, but this is of no great concern provided classical mechanics is reproduced to a good approximation, in the sense just discussed, by any of them. Therefore all discussions which involve nothing but classical physics can, from the quantum perspective, be carried out using only a single quasiclassical framework. And as long as reasoning and descriptions are restricted to this one framework there is no need for the single framework rule, which explains why a central principle needed to understand quantum mechanics is completely absent from classical physics. Sometimes the objection has been raised [33,34] that quasiclassical frameworks are not the unique possibilities allowed by the histories approach to quantum mechanics. 
In particular, a consistent family can be constructed in which the projectors are quasiclassical up to some time, and then followed by a completely different type of projector at later times, and there is no reason from the perspective of fundamental quantum mechanics to disallow this. However, there is also no reason to prefer it. The histories approach does not deny that other incompatible consistent families can be constructed; it simply insists that this possibility does not invalidate a description employing a quasiclassical framework, which is what is needed for thinking about pointer positions. See the discussion of Liberty, etc. in Sec. 2.2. By analogy, the possibility of using S_x to describe a spin-half particle does not invalidate a description based on S_z; what cannot be done is to combine them.

A measurement model

To see how the histories approach resolves both measurement problems we consider a simple model of the measuring process that, apart from minor changes, goes back to Ch. VI of [14]. Let H_S be the Hilbert space of the system to be measured, henceforth referred to as a particle, and H_M that of the measuring device. For example, H_S could be the 2-dimensional Hilbert space of the spin of a spin-half particle, while the quantum description of its position might be among the variables included in H_M. Let {|s^j⟩} be an orthonormal basis for H_S, with states labeled by a superscript so that the subscript position can refer to time. At t_0 let |M_0⟩ be the initial (normalized) state of the apparatus, while

    |ψ_0⟩ = Σ_j c_j |s^j⟩,   |Ψ_0⟩ = |ψ_0⟩ ⊗ |M_0⟩,   (7)

are the initial states of the particle and of the total closed system that includes both particle and measuring device. The c_j are complex numbers satisfying Σ_j |c_j|² = 1, so both |ψ_0⟩ and |Ψ_0⟩ are normalized. Let T(t, t′), as in Sec. 2.3, be the unitary time development operator for the total system, and assume it is trivial, equal to the identity operator I = I_S ⊗ I_M, for t and t′ both less than some t_1 or both greater than t_2, and that for the interval from t_1 to t_2,

    T(t_2, t_1) (|s^j⟩ ⊗ |M_0⟩) = |ŝ^j⟩ ⊗ |M^j⟩.   (8)

Here the {|M^j⟩} are orthonormal states of the apparatus, ⟨M^j|M^k⟩ = δ_jk, corresponding to different pointer positions, and the {|ŝ^j⟩} on the right side of the equation (note they are not assumed to be the same as the {|s^j⟩} on the left side) are normalized, ⟨ŝ^j|ŝ^j⟩ = 1, though (unlike von Neumann's original model) not necessarily orthogonal. The transformation (8) is unitary, or, to be more precise, it can be extended to a unitary transformation on H_S ⊗ H_M, because the orthogonality of the |M^j⟩ ensures that the states on the right side of (8) are mutually orthogonal, even though this may not be true of the {|ŝ^j⟩}. Noting that T(t_2, t_0) = T(t_2, t_1) · I and applying (8) to (7), one sees that unitary time development leads to states |Ψ_1⟩ and |Ψ_2⟩ for the total system at times t_1 and t_2.

We now wish to consider various consistent families that begin with [Ψ_0] = |Ψ_0⟩⟨Ψ_0| at time t_0. One possibility is unitary time development: the family F_0 in which [Ψ_0] at t_0 is followed by [Ψ_1] at t_1 and [Ψ_2] at t_2. The histories approach can solve the first measurement problem by using, in place of F_0, the family F_1 in which [Ψ_1] at t_1 is followed at the final time t_2 by the pointer projectors {[M^k]}. Here the alternative I − [Ψ_1] at t_1, which occurs with zero probability, has been omitted, and we employ the usual physicist's convention that [M^k] = |M^k⟩⟨M^k| means I_S ⊗ [M^k] on the full Hilbert space H_S ⊗ H_M. An additional projector R′ = I − Σ_k [M^k] should be included at the final time in (11) so that the projectors at that time sum to the identity I, but again it is omitted since its probability is zero.
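The displayed equations of the original giving the time-developed states and the pointer probabilities are not reproduced in this extract; the following LaTeX sketch records the forms that follow from the definitions (7) and (8) above. The labels F_0 and F_1 follow the surrounding prose, and the original equation numbers are omitted.

\[
|\Psi_1\rangle = T(t_1,t_0)\,|\Psi_0\rangle = |\psi_0\rangle \otimes |M_0\rangle, \qquad
|\Psi_2\rangle = T(t_2,t_1)\,|\Psi_1\rangle = \sum_j c_j\, |\hat{s}^j\rangle \otimes |M^j\rangle ,
\]
\[
\Pr\bigl([M^k] \text{ at } t_2\bigr) \;=\; \langle \Psi_2 |\, I_S \otimes [M^k] \,| \Psi_2 \rangle \;=\; |c_k|^2 .
\]

In family F_1 each pointer position [M^k] thus occurs with probability |c_k|^2, which is exactly the pre-probability calculation described in the following paragraph.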
Note that there is no reference to the later particle states |ŝ^k⟩ in (8); they are in fact irrelevant for discussing the macroscopic outcomes of the measurement. While [Ψ_2] cannot be one of the properties at time t_2 in family F_1 (see the discussion of F_0 above), it can very well be used as a mathematical device (a pre-probability in the terminology of Sec. 9.4 of [13]) to calculate probabilities of the different pointer positions. Note that this is a perfectly legitimate and consistent "epistemic" use of |Ψ_2⟩, since it, like a probability distribution, provides some information about the system, even when it does not represent a physical property.

In order to relate the measurement outcome to a prior property of the measured particle and thus solve the second measurement problem, one needs still another family, in which the particle properties {[s^j]} at time t_1 are followed by the pointer projectors {[M^k]} at time t_2. The joint probabilities of this family, with marginals given by the |c_j|², imply that if |c_k|² > 0 the conditional probability of the earlier property [s^k] at t_1, given the later pointer position [M^k] at t_2, is equal to 1. That is to say, from the (macroscopic) measurement outcome or pointer position k at time t_2, standard statistical inference allows one to infer, to retrodict, that the particle had the property [s^k] at the earlier time t_1. Also note that, by (15), the probabilities for particle properties just before the measurement are identical to those of the later pointer positions. This is what one would expect for ideal measurements. It shows that textbooks (which follow the lead of [14]), in which students are taught to calculate |c_j|² for the particle alone and then ascribe the resulting probability to the outcome of a measurement, are not wrong, just confusing.

It is worth remarking once again that the later particle states, the |ŝ^j⟩ in (8), play no role in the discussion. In von Neumann's model these were set equal to the |s^j⟩, and while this is of course a perfectly legitimate choice, it has given rise to considerable confusion in that "measurements" are often interpreted as involving a correlation between the pointer position and the later particle state, something that should properly be called a preparation, not a measurement.

The measurement model discussed above is in some ways rather artificial. However, once one understands its basic features it is possible to construct more realistic models in which the particle states to be measured form a general (projective) decomposition of I_S not limited to pure states, the pointer positions are represented not by pure states but by quasiclassical projectors onto appropriate Hilbert spaces, the initial state of the apparatus is a quasiclassical projector or a density operator, and allowance is made for thermodynamic irreversibility in the measurement process. See the relevant sections of Ch. 17 in [13] for details. None of these extensions alters the basic conclusions reached on the basis of the simple model used above.

Information

Given a system with a d-dimensional Hilbert space, a projective decomposition of the identity can contain at most d projectors, one for every element of some orthonormal basis. And since a probabilistic quantum description must use only a single framework, the maximum amount of Shannon information that can be associated with a sample space of a d-dimensional quantum system, the Shannon entropy H when a probability 1/d is assigned to each possibility, is log_2 d. Thus a qudit of dimension d (using the terminology of quantum information theory) cannot receive or contain or carry more than log_2 d bits of information.
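As a check on the counting in the preceding paragraph, the single-framework bound can be written out explicitly; this is a restatement of the argument just given, not a quotation of the original's equations.

\[
H \;=\; -\sum_{i=1}^{n} p_i \log_2 p_i \;\le\; \log_2 n \;\le\; \log_2 d ,
\]

since a projective decomposition of the identity of a d-dimensional Hilbert space contains at most n = d mutually orthogonal projectors, and the Shannon entropy of a distribution over n alternatives is maximized by the uniform distribution p_i = 1/n.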
This is a fundamental epistemic restriction arising directly from the Hilbert space structure of quantum mechanics. This restriction is confirmed by, or is at least consistent with, various results in quantum information theory. A common scenario is one in which Alice prepares a qudit of dimension d in a known quantum state, chosen from a specified set of possibilities according to some probability distribution, and sends it to Bob, who is allowed to carry out a generalized measurement (POVM), or perhaps a collective measurement on several qudits at the same time, with the aim of determining which states Alice prepared. We assume the protocol is repeated N times, and each time Alice records what she prepares. Similarly, Bob records his measurement outcomes. As both of their records belong to the quasiclassical world they can be analyzed using classical (Shannon) information theory, and a well-known bound due to Holevo (see, e.g., Sec. 12.1.1 of [35]) shows that the Shannon mutual information H(A : B) cannot exceed N log 2 d. This upper bound is actually achieved if Alice randomly (with equal probability) prepares states corresponding to some orthonormal basis, and Bob measures in the same basis. Thus the limitation implied by the analysis of properties and measurements in Sec. 3.3 can, with the help of some not altogether trivial mathematics, be shown to be quite general. Neither sophisticated encoding schemes nor the most general of generalized measurements can do any better than what is implied by the analysis in Secs. 2 and 3: at most d N distinct messages, corresponding to log 2 d N = N log 2 d bits, can be constructed using N letters chosen from an alphabet of size d. The reader could be concerned that this epistemic limit might be violated by schemes for dense coding, or is somehow inconsistent with teleportation, or maybe does not apply to quantum, in contrast to classical, information. Let us briefly discuss each of these, starting with the last. Both "quantum" and "classical" information remain ill-defined terms in the quantum information literature, despite (or perhaps because of?) an enormous number of publications. However, one can understand the basic issue by means of a simple example, a d = 2 perfect quantum channel, constructed from a magnetic-field-free pipe into which Alice sends a spin-half particle, which is measured by Bob when it emerges at the other end. If Alice prepares the state S w = +1/2, where w is some specific direction in space: z or −z or x or whatever, and Bob measures in the S w basis, the result will always (probability 1) be S w = +1/2 and not S w = −1/2. The basis must be specified, as there is no way to prepare (or measure) a particle with, say, S z = +1/2 AND S x = 1/2. Consequently this quantum channel with capacity 1 (qu)bit cannot actually transmit information at a rate greater than a perfect classical channel that can only transmit quasiclassical states corresponding to a bit which is either 0 or 1, and always yields the same output as the input. More generally, the quantum capacity cannot exceed the classical capacity (see Secs. 12.3 and 12.4 of [35] for technical discussions), and talking about quantum information (whatever it might be!) does not alter the log 2 d upper limit for one qudit. One can be misled by the very useful but somewhat dangerous visualization of a spin-half particle as a little top spinning about a particular axis. 
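The spin-half channel just described saturates the general limit invoked earlier in this section, which can be stated compactly as follows; this is the standard form of the Holevo-type bound (see Sec. 12.1.1 of [35]) rather than a formula quoted from the text.

\[
H(A{:}B) \;\le\; N \log_2 d ,
\]

with equality attainable when Alice prepares, with equal probabilities, the states of a single orthonormal basis of the qudit and Bob measures in that same basis. For the pipe channel above, d = 2 and the agreed S_w basis yields exactly one bit per transmitted particle, so neither "quantum" labeling nor generalized measurements push the rate above log_2 d bits per qudit.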
The direction w which Alice uses to produce the state S w = +1/2 might be specified very precisely by some macroscopic setting on her apparatus, so it is tempting to suppose that information about this precise setting, which might amount to many bits (depending on the precision), is then carried away by the particle. But, as noted in Sec. 2.1, there is no room in the quantum Hilbert space for this kind of information, the distinction one might want to make between w and a direction w ′ which is nearby, or even between w and a w ′ which is perpendicular to it (e.g, x instead of z). 5 A less misleading, albeit still classical, visualization is to think of the S w = +1/2 state as a spinning top whose axis is oriented at random, but with the constraint that the w component of its angular momentum be positive. Dense coding is a process by which d 2 messages can be transmitted from Alice to Bob by sending a single d-dimensional qudit through a quantum channel, provided a fully entangled state of two qudits, one in Alice's laboratory and the other in Bob's, is initially available. This might seem to violate the epistemic limit, since log 2 d 2 = 2 log 2 d is clearly larger than log 2 d. The solution to this apparent paradox is a proper microscopic analysis of where information is located at the intermediate time between Alice's preparation and Bob's measurement [36], a type of analysis which is not easy to do using the tools provided in typical textbooks (including those devoted to quantum information), because they provide no systematic way of thinking about microscopic quantum properties during the time interval between the (quasi)classical preparation and the (quasi)classical measurement. To discuss the quantum state of the two qudits, which are initially in a fullyentangled state, one has to use the tensor product of their Hilbert spaces, of dimension d 2 . There is then a projective decomposition of its identity corresponding to an orthonormal basis containing d 2 fully-entangled states, which are used to encode the d 2 messages. Thus there is no violation of the epistemic limit, for two particles are involved. We refer the reader to [36] for a similar discussion of teleportation; once again, all information can be properly accounted for, and the fact that two uses of a d-dimensional classical channel are required to complete the protocol, does not imply that a qudit or the corresponding quantum channel has an information capacity of 2 log 2 d. (Also see the discussion in [37] for further insight into the need for a double usage of a classical channel.) In addition, quantum information theory contains various epistemic restrictions on the information that can be obtained from a quantum system or transmitted from one place to another in the form of inequalities, sometimes referred to as epistemic uncertainty relations. Because some of them are obtained using sophisticated mathematics, and tend to be expressed in terms of (quasiclassical) preparations and measurements, it is not always made clear that these, too, arise from the fundamental Hilbert space structure of quantum theory. So far as is known at present, information inequalities derived in this way, such as in [38], are entirely consistent with experiment, in contrast to Bell inequalities and the like, which do not agree with experiment, and whose derivation is based in a fundamental way upon classical physics (see, e.g., [1]). 
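Before turning to the conclusion, the counting behind the dense-coding discussion above can be summarized in one line; again this is a restatement in our notation, not a quotation.

\[
\dim\bigl(\mathcal{H}_A \otimes \mathcal{H}_B\bigr) = d^{2},
\qquad \log_2 d^{2} \;=\; 2\log_2 d ,
\]

so the d^2 distinguishable messages correspond to an orthonormal basis of fully entangled states of two qudits, and each individual qudit still respects the log_2 d limit.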
Conclusion If one assumes following von Neumann, and consistent with textbook quantum theory where it is not always clearly discussed, that properties of a quantum system correspond to subspaces of its Hilbert space, then there is a very natural epistemic restriction on what an external observer can know about a microscopic quantum system. The Hilbert space does not contain, has no room for, combinations of incompatible properties, such as S x = +1/2 AND S z = −1/2 for a spin half particle, and as a consequence these must be excluded from a consistent quantum ontology, as discussed in detail in [12]. And of course what does not exist cannot be known; such an ontological restriction leads automatically to an epistemic restriction. Quantum textbooks already contain a version of this epistemic restriction. Students are told that incompatible quantum properties cannot be simultaneously measured. However, because textbooks treat measurements as a sort of axiom which is incapable of further analysis, a black box which cannot be pried open to see what is going on inside, the nature of this restriction remains clouded in a dense conceptual fog. One needs a consistent quantum analysis of measurements, one capable of resolving both the first and the second quantum measurement problems, to relate this restriction on measurements to mathematical properties of the Hilbert space used in quantum mechanics to represent physical properties. And, as noted in Sec. 4, some of the rigorous inequalities developed by quantum information theorists using the quantum Hilbert space are also epistemic restrictions. In addition to resolving the measurement problems, the histories approach provides a foundation for understanding all the other strange, i.e., nonclassical, quantum phenomena, including those which cannot at present be obtained from a classical model by adding an epistemic restriction (see the discussion in Sec. V of [5]). This is because it has a consistent formulation of the fundamental principles of Hilbert space quantum mechanics, the principles that underlie the generally accepted calculational techniques taught in textbooks. It may be that applying still more restrictions, or perhaps additions, to classical models will eventually yield the correct quantum outcomes. But one can ask whether such a circuitous route, somewhat analogous to tweaking Bohr's semiclassical quantization condition, would be worthwhile, given the availability of a more direct path to understanding the phenomena in question. This is not to say that the study of classical models of the sort considered in [4,5] is without value. It is surely of interest to understand the limits of classical physics when it is pushed as far as possible into the quantum domain. Not least because in the domain where classical physics functions very well, the quasiclassical regime of macroscopic phenomena, it provides a much simpler and easier calculational scheme than any full-scale quantum counterpart. Who would ever want to compute an earth satellite orbit starting with a wave function? However, such studies of classical models will, we believe, be most effective when combined with a consistent and complete microscopic quantum theory, one that takes full account of the noncommutation of quantum projectors, is not dependent upon a vague concept of "measurement," and does not require any additional epistemic restrictions beyond those implied by the formalism itself. It is hoped that the work presented here will contribute to that end.
Insertion of orthodontic temporary anchorage devices with free gingival grafting for phenotype modification of the peri-implant mucosa

Background: Mini Implants are widely used in contemporary orthodontics; they provide skeletal anchorage even in non-compliant patients, facilitate orthodontic tooth movement, are easy to place and are relatively inexpensive. Their failure is multifactorial, and the quality of the soft tissue can present a risk limitation for the insertion of TADS. Orthodontic Mini Implants inserted in keratinized gingiva present fewer tissue-related complications and a higher survival rate than those inserted in non-keratinized mucosa. The purpose of this report is to present and describe this novel technique to modify and enhance the peri-implant mucosa of Orthodontic Mini Implants inserted in non-keratinized gingiva.

Methods: A free gingival graft was harvested from the palate in combination with a buccal recipient site preparation in the alveolar mucosa and a TAD insertion procedure.

Results: After twenty-one days of healing, graft integration was observed. One hundred and eighty days after insertion and twelve weeks of loading, none to mild signs of clinical inflammation were documented, and the patient reported no pain or discomfort.

Conclusion: Within the limitations of this report, free gingival grafting for phenotype modification of the peri-implant mucosa can benefit patients who need insertion of orthodontic mini-implants in non-keratinized mucosa for orthodontic tooth movement.

Introduction

Since the first clinical reports of skeletal anchorage with implants [1,2], Orthodontic Temporary Anchorage Devices (TADS) have broadened the scope of orthodontic treatment [3]. Mini implants inserted in keratinized gingiva present higher success rates, and patients present fewer tissue-related complications [4,5]. For insertion site selection, TADS should be placed in the safest anatomical position, and the presence of keratinized gingiva is preferred. Considering a 1.5 mm diameter mini screw, a minimum of 3 mm of space between the roots of neighboring teeth is recommended. The interradicular distance is greater toward the apices. To reduce the possibility of contacting a root, a 30-60° apical angulation should be used [6,7]. Orthodontic Mini Implants placed in movable alveolar mucosa could present a risk of tissue irritation. The insertion zone of opportunity has been described as extending 2 mm apical from the mucogingival junction, where the mucosa has virtually no mobility. But root proximity could still be a problem, requiring placement of the mini screws more apical to the mucogingival junction, where the mucosa is thinner and more mobile and the mini screws present a higher incidence of tissue overgrowth, inflammation, micro-tears and ulcerations that cause pain or discomfort to the patient [8,9]. Thin phenotype and inadequate width of keratinized gingiva (<2 mm) may be a significant risk indicator for peri-implant mucositis, peri-implantitis and pain/discomfort during brushing [10].
Palatal gingival grafts were first described by Bjorn in 1963 [13]. Regarding the potential benefits of phenotype modification therapy on gingival health, biotype defines a specific genetic trait, whereas phenotype is a multifactorial combination of genetic traits and environmental factors. Gingival phenotype is site specific and contains components that can change over time depending on environmental factors. Phenotype modification therapy can create a more favorable environment for the prevention of disease and the maintenance of periodontal health [14]. The potential benefits of phenotype modification for patients receiving orthodontic treatment include enhanced periodontal health, increased stability of orthodontic outcomes, reduced periodontal complications, shortened orthodontic treatment time, optimal periodontal and orthodontic outcomes, expanded opportunities and increased boundaries for treating malocclusions, reduced need for extractions and orthodontic camouflage, and increased limits for arch expansion [14].

Orthodontic Mini Implants were inserted with free gingival grafting to modify and enhance the peri-implant mucosa from thin and non-keratinized to thick and keratinized at the insertion site, extending the insertion zone of opportunity, providing comfort to the patient, preventing soft tissue-related complications, reducing the risk of contacting the roots of neighboring teeth and making it possible to apply vectors of force closer to the center of resistance [15]. TAD grafting can benefit patients with vestibular interradicular TADS inserted in non-keratinized alveolar mucosa, extraradicular TADS in supra-apical or subapical insertion, the mandibular retromolar area and the posterior mandibular buccal shelf (as shown in Fig. 1).

Material and methods

From a group of thirty-six orthodontic patients diagnosed for vestibular interradicular TAD insertion in private practice, four patients who did not have enough interradicular space in keratinized gingiva for TAD insertion were selected to receive a free gingival graft at the time of insertion. Two patients who have completed at least a six-month observational period after TAD insertion have been included in this report (six Mini Implants, three of them with free gingival grafting).

After cleaning with chlorhexidine and infiltrating lidocaine, a free gingival graft 2 mm thick was harvested from the palate with a 6 mm biopsy tissue punch after de-epithelization. A resorbable wound dressing hemostat was applied for donor site management [16] (as shown in Fig. 2). With the pilot drill attached to the screwdriver, a hole was made at the center of the graft, and the Orthodontic Mini Implant was inserted through the graft with the screwdriver all the way to the neck (as shown in Fig. 3). At the recipient site, the buccal mucosa was stretched and the insertion point was marked with the pilot drill; the tissue around the selected area was de-epithelized with a 3 mm diameter round diamond bur. A self-drilling titanium Orthodontic Mini Implant with 1.5 mm diameter and 8 mm length was inserted until the graft was in contact with the recipient bed, taking care not to over-compress the graft, to prevent necrosis and allow revascularization (as shown in Fig. 4). The same procedure was performed on the other side of the mouth if needed. Postoperative instructions were given, and antibiotics, ibuprofen and chlorhexidine rinses were prescribed.
Results

Graft integration was observed after twenty-one days of healing. Pink color, no swelling, no bleeding and firm tissue consistency were documented (as shown in Fig. 5). After the period of healing, the patient was instructed in oral hygiene and, during the next six weeks, reported no pain or discomfort.

Sixty days after insertion and four weeks of loading, visual inspection of the peri-implant mucosa showed none to mild signs of inflammation. Phenotype modification of the alveolar mucosa allowed the insertion of Orthodontic Mini Implants apical to the mucogingival junction, reducing the risk of contacting the roots of neighboring teeth.

One hundred and fifty days after TAD insertion and twelve weeks of loading, apart from inflammation present in the lip mucosa caused by contact and friction with the head of the mini screw, the keratinized peri-implant mucosa remained healthy. One hundred and eighty days after TAD insertion, the keratinized peri-implant mucosa prevented overgrowth of the movable lining mucosa around the Orthodontic Mini Implants (as shown in Fig. 6). After TAD removal, the free gingival graft can be left in situ, as in periodontal therapy, or removed at the patient's request; no complications have been observed after TAD removal (as shown in Fig. 7).

Discussion

Houfar et al. reported that the success rate of 387 Orthodontic Mini Implants was primarily affected by the insertion site. Palatal Mini implants were nearly always successful (98.9%), while buccal Mini implants were clearly less successful (71.1%). It appears that buccal interradicular insertion combined with the type of anchorage resulted in a lower survival rate. The presence of keratinized mucosa, the bone density of the anterior palate and the absence of the roots of teeth seem to play an important role in the success rate of paramedian insertion in the anterior palate [17]. Cheng et al., in a prospective clinical study involving 140 Mini implants, found that the absence of keratinized mucosa around Mini implants significantly increased the risk of infection and failure [18]. Lai and Chen, in a retrospective study of 266 TADS, reported a survival rate of 96.2% for Orthodontic Mini Implants placed in buccal keratinized mucosa, significantly higher than those placed in oral lining mucosa, which had a survival rate of 66.7% [19]. Manni et al. found similar results in a retrospective study of 300 mini-screws, with higher success rates when inserted in attached gingiva or at the mucogingival line [20]. Chen et al. found, in a retrospective study of 194 patients with 492 TADS, that inflammation of the soft tissue surrounding a TAD and early loading within 3 weeks after insertion were the most significant factors predicting TAD failure [21].
Lang and Löe wrote in 1972 about the importance of keratinized gingiva for gingival health; years later, in 1995, Lang et al., in an experimental study in monkeys, found that the absence of keratinized mucosa around dental implants increases the susceptibility of the peri-implant region to plaque-induced tissue destruction [22,23]. The presence of at least 2 mm of keratinized mucosa around dental implants has a protective effect on the peri-implant tissue condition, facilitates brushing and plaque control, and is associated with less peri-implant inflammation, lower mean alveolar bone loss and improved indices of soft tissue health [24]. Phenotype modification to enhance the peri-implant mucosa of orthodontic TADS provides a more favorable environment for the prevention of soft tissue complications and the maintenance of peri-implant health, especially in non-compliant orthodontic patients [25,26]. Orthodontic Mini Implants are more frequently placed in non-keratinized mucosa than dental implants, and the difference in failure rates reported in the literature for orthodontic TADS placed in keratinized versus non-keratinized mucosa is significant [27]. Similar techniques have been applied in implant dentistry, using healing screws with connective tissue grafts to augment the gingival volume around implants. After online research and a PubMed review, the author believes that this is the first time that free gingival grafting in combination with a TAD has been applied in orthodontics for TAD insertion in non-keratinized alveolar mucosa.

Four patients met the selection criteria from a group of thirty-six patients, but this is mainly a documentation of the first two cases. The other two cases are in the orthodontic preparation process to receive a TAD. To this date all grafting procedures have been successful.

Excessive pressure on the graft is prevented by not inserting the full length of the mini screw; the clinician will see when the graft is in contact with the recipient bed and should stop when the whole base of the graft is in contact with it. Blanching of the graft should be prevented with this method, allowing revascularization and preventing necrosis.

Periodontal dressing has not been used to aid stabilization of the graft, but in the first case, a TAD insertion in the posterior maxilla, the graft did not have contact over its whole base; interrupted sutures were added mesially and distally to stabilize the graft, and complete healing was achieved without any further complication. The head of the TAD is wider than the shank, helping to support and stabilize the graft. It is very important to perforate the graft at the center with a pilot drill before inserting the TAD, to prevent tearing the graft with the screw.

Fig. 5. Twenty-one days after insertion, healing and graft integration is observed.
Fig. 6. One hundred and eighty days after insertion, the peri-implant mucosa remained healthy and prevented tissue overgrowth from the alveolar mucosa.
Conclusions Within the limitations of this report of two cases, free gingival grafting for phenotype modification of the peri-implant mucosa, may benefit patients who need the insertion of TADS in nonkeratinized mucosa for orthodontic tooth movement, facilitating oral hygiene and preventing soft tissue-related complications.Additionally, the risk of root contact can be reduced by placing the TADS more apical.Tad insertion with gingival grafting can help clinicians inserting TADS in non-keratinized mucosa for incisor intrusion, molar protraction, mandibular arch retraction, total arch intrusion and distalization.It is important to evaluate the height of the mucogingival junction and the proximity of the roots to determine the need for gingival grafting and select the best site for TAD insertion.Six weeks of healing are necessary for integration and maturation of the graft before loading the TADS. I have obtained informed consent from patient/s and/or legal guardians, and I have included a separate statement in the manuscript file.-I have obtained informed consent from patient/s and/or legal guardians, and I have included a separate statement with this manuscript file. Fig. 7 . Fig. 7. Free gingival graft in the alveolar mucosa of the anterior maxilla, after tad removal.
New perspective of ClC-Kb / 2 Cl channel physiology in the distal renal tubule Zaika O, Tomilin V, Mamenko M, Bhalla V, Pochynyuk O. New perspective of ClC-Kb/2 Cl channel physiology in the distal renal tubule. Am J Physiol Renal Physiol 310: F923–F930, 2016. First published January 20, 2016; doi:10.1152/ajprenal.00577.2015.—Since its identification as the underlying molecular cause of Bartter’s syndrome type 3, ClC-Kb (ClC-K2 in rodents, henceforth it will be referred as ClC-Kb/2) is proposed to play an important role in systemic electrolyte balance and blood pressure regulation by controlling basolateral Cl exit in the distal renal tubular segments from the cortical thick ascending limb to the outer medullary collecting duct. Considerable experimental and clinical effort has been devoted to the identification and characterization of disease-causing mutations as well as control of the channel by its cofactor, barttin. However, we have only begun to unravel the role of ClC-Kb/2 in different tubular segments and to reveal the regulators of its expression and function, e.g., insulin and IGF-1. In this review we discuss recent experimental evidence in this regard and highlight unexplored questions critical to understanding ClC-Kb/2 physiology in the kidney. The molecular profile of ClC-Kb/2 channels.ClC-Kb/2 belongs to the ClC family of anion channels/transporters found in the early 1990s upon the cloning of the first member, ClC-0, from the electric organ of Torpedo marmorata (39).ClC proteins function as dimers having two identical anion conducting pathways operating in a discretional manner (14,15,38).Each ClC monomer has 18 helices, but not all of them project across the membrane (14).Upon reconstitution, two identical current steps with independent kinetics can be detected by single channel analysis, which is consistent with a channel having two pores (also often referred as "doublebarreled" appearance) (reviewed in Ref. 38).A fast gate leads to an independent opening of each pore upon depolarization, whereas a common slow gate simultaneously controls both pores during hyperpolarization.Crystal structure of the bacterial ClC-ec channel suggests a critical role of glutamate at the position 148 in governing the "fast gate" during channel opening and conferring voltage-and Cl Ϫ -dependent properties (15,38).It was also proposed that protonation status of this negatively charged residue may convert at least some ClCs from a channel into an exchanger (Cl Ϫ /H ϩ or NO 3 Ϫ /Cl Ϫ ) mode (1,2,59,65).Interestingly, ClC-Kb/2 and its closely structurally related ClC-Ka (ClC-K1 in rodents) have a hydrophobic valine at this position and do not demonstrate noticeable voltage dependence (46,85).The detailed analysis of ClC-Kb/2 structure has been recently reviewed (4). 
The unique characteristic of both ClC-K channels is their ability to interact with barttin (a two-transmembrane domain protein; a product of the BSND gene), which serves as an essential ␤-subunit critical to form a functional channel (17).Overexpression of either human ClC-Kb or ClC-Ka fails to produce a notable anionic current in the absence of barttin coexpression.In contrast, mouse and rat ClC-K1 (but not ClC-K2) can form a functional channel at the expense of a reduced activity (46,66).It was demonstrated that barttin binds to the transmembrane domains of ClC-K2 thereby promoting its translocation to the plasma membrane (77).Moreover, ClC-K2 is retained in the Golgi in the absence of barttin (30).Similarly, only a small portion of ClC-K1 can be detected at the plasma membrane when barttin is not coexpressed (66).It has been recently proposed that palmitoylation of the two conserved cysteine residues on positions 54 and 56 of barttin is necessary for proper plasma membrane insertion of ClC-K/ barttin complexes (76). Similar to other ClCs, ClC-K channels expressed in oocytes produce anion-selective current with a permeability sequence of Cl Ϫ Ͼ Br Ϫ Ͼ NO 3 Ϫ Ͼ I Ϫ (17,46,84).Extracellular Ca 2ϩ increases ClC-K activity likely via a direct binding to the I-J loop (17,22,23,46,84).In addition, the channels are extremely sensitive to proton concentration.Basic pH (8.0) increases ClC-K activity, whereas acidic pH (Ͻ6.0) practically closes the channels (17,46,52,55,84,93).A critical role of a histidine residue (H497) in proton-induced inhibition of ClC-K has been proposed (22).Extremely high alkaline media (pH ϭ 11) can also inhibit ClC-Kb likely via deprotonation of a highly conserved lysine (K165) (24) although it is not clear whether this is relevant for ClC-Kb/2 function in the kidney since the channel is still active at pH ϭ 10. 
ClC-Kb/2 expression in the kidney.ClC-Kb/2 and its highly homologous partner ClC-Ka/1 (ϳ90% identity) were originally cloned from both human and rat kidneys (42,81).The "K" in ClC-K is due to predominantly kidney-specific expression.However, it is recognized that both channels can also be found in the inner ear serving as an important component of the basolateral Cl Ϫ conductance that participates in K ϩ secretion by these epithelia (17,64).With the use of molecular tools, it was originally proposed that rat ClC-K2 can be detected in all segments of the nephron, including the glomerulus, but these results were not consistent (42,92).A critical advance in discovery of ClC-K2 expression in the kidney was made by Uchida's group upon developing ClC-K1 Ϫ/Ϫ mice (53).Using antibodies recognizing both ClC-K1 and -K2 in the knockout mice, they demonstrated that ClC-K2 is expressed on the basolateral membrane of the thick ascending limb (TAL), distal convoluted tubule (DCT), connecting tubule (CNT), and intercalated cells of the collecting duct (CD) with a proposed role in mediating Cl Ϫ reabsorption in these sites (45).Consistent with these results, barttin expression was also detected in TAL, DCT, CNT, and intercalated but not principal cells of the CD (17).Despite close structural homology, the expression patterns of ClC-K1 and ClC-K2 almost do not overlap.ClC-K1 is present predominantly on both apical and basolateral plasma membranes of the thin ascending limb contributing to generation and maintenance of the hypertonic medullary interstitium and countercurrent mechanism (53,82).CLC-K1 Ϫ/Ϫ mice exhibit classical nephrogenic diabetes insipidus, including polyuria and resistance to exogenous administration of vasopressin (53). Consistent with the sites of ClC-K2 expression in the renal tubule, an anion-selective channel with 10 pS conductance has been detected on the basolateral membrane of freshly isolated cortical TAL (25,87), DCT (52), CNT, and cortical CD (55,93).It recapitulates major biophysical properties of ClC-K channels, namely anion selectivity sequence, pH dependence, and activation by extracellular Ca 2ϩ .Unfortunately, there are no reports monitoring single channel ClC-Kb/2 in overexpression systems to perform direct comparison with the basolateral 10-pS Cl Ϫ -permeable channel.On the contrary, multiple reports document ClC-Kb/2-mediated macroscopic whole cell current (17, 22-24, 66, 84).This may potentially indicate that the channel operates in a transporter mode upon overexpression, precluding its electrical resolution at the molecular level.Future studies are necessary to resolve this intriguing possibility.Notably, the recorded 10-pS Cl Ϫ -permeable channel has slow kinetics and does not exhibit a double-barreled appearance (i.e., full and half openings), which is characteristic of the ClC ion channel family (15,18).Again, the nature of this "breaking-the-pattern" gating is enigmatic.Nevertheless, ClC-K1 produces a single channel with 40 pS conductance upon overexpression in HEK293 cells (46).This apparent difference in single channel properties between ClC-K1 and the natively expressed Cl Ϫ -permeable channel further supports the view that ClC-K2 activity underlies the 10-pS Cl Ϫ channel in distal tubular segments of the nephron. 
ClC-Kb/2 and Bartter's syndrome.Bartter's syndrome is an autosomal recessive tubulopathy that presents in early childhood and is characterized by urinary salt wasting, hypokale-mia, metabolic alkalosis, low blood pressure, resistance to loop diuretics, such as furosemide, and secondary compensatory hyperaldosteronism (44,68).Bartter's syndrome patients also commonly exhibit abnormalities in divalent cation balance and nephrocalcinosis due to altered tubular reabsorption of Mg 2ϩ and Ca 2ϩ , respectively, while hypomagnesemia is rare (16,26).Five different types of Bartter's syndrome have been identified due to mutations in genes encoding specific transport or signaling proteins predominantly expressed in the TAL (44).Missense and nonsense mutations in the Clcnkb gene encoding ClC-Kb underlie Bartters' syndrome type 3, directly demonstrating a role of this Cl Ϫ channel in renal salt reabsorption and blood pressure homeostasis (70).The detailed analysis of different disease-causing mutations in ClC-Kb has been recently provided by an excellent review by Andrini and colleagues (4).Similarly, mutations in the BSND gene encoding barttin underlie Bartter's syndrome type 4 (7).Recall, barttin serves as ␤-subunit of ClC-Kb essential for channel function and translocation to the plasma membrane (17,66).No Bartter's syndrome mutations in the gene encoding highly homologous ClC-Ka channel have been reported in humans.In contrast to Bartter's syndrome type 1 (due to loss-of-function mutations in the NKCC2 transporter) (71) or type 2 (KCNJ1 or ROMK1 channel) (72), patients with Clcnkb mutations do not develop nephrocalcinosis with variable (from low to high) urinary Ca 2ϩ excretion (36,70,96).In general, Bartter's syndrome type 3 manifestations are reminiscent of Gitelman's syndrome, caused by loss-of-function mutations in the gene encoding electroneutral NCC cotransporter (Slc12A3), expressed in the DCT (73).This apparent similarity most likely reflects a critical role of ClC-Kb in mediating basolateral Cl Ϫ exit in the DCT as well.Indeed, insensitivity to thiazide diuretics (targeting NCC) has been described in patients with Bartter's syndrome type 3 (56). Insights into the functional role of ClC-Kb/2 in different nephron segments.ClC-Kb/2 function in the kidney is an important regulator of electrolyte balance, circulating volume, and, by its extension, blood pressure.As has been outlined in the previous section, loss-of-function mutations in ClC-Kb and barttin cause urinary salt wasting, hypochloremia, and low blood pressure.Systemic administration of ClC-K channel inhibitors increased urinary volume and lowered blood pressure in rats, suggesting that ClC-K blockers can be potentially used as a new class of diuretics (50).On the other hand, gain-of-function polymorphism ClC-Kb T481S in humans is associated with elevated blood pressure, presumably due to enhanced renal salt retention (37,69).Similarly, Milan hypertensive rats and Dahl salt-sensitive rats exhibit increased ClC-K2 expression in the DCT and renal medulla, respectively, which can at least partially contribute to salt sensitivity of blood pressure in these models (11,12). 
ClC-Kb/2 is expressed on the basolateral membrane of three different tubular segments: TAL, DCT, and CNT/cortical collecting duct (CCD) (3,45).However, much less is known about specific functions and the relative contribution of ClC-Kb/2 in each region.In TAL and DCT, this channel is expressed virtually in all cells, where it is thought to take part in transcellular Cl Ϫ reabsorption working in tandem with the apically localized transporters NKCC2 and NCC, respectively (Fig. 1A).In addition, the existence of the electroneutral basolateral K ϩ -Cl Ϫ pathway, most notably via the KCC4 cotransporter (Slc12a7), has been reported in TAL, DCT, and CCD (9,83).However, this coupled K ϩ and Cl Ϫ exit cannot compensate for the loss of function of either ClC-Kb/2 or K ϩ K ir 4.1/5.1 channels (see below).No clinical syndrome associated with KCC4 mutations has been described so far.It should be noted though that mice deficient of KCC4 develop renal tubular acidosis probably resulting from impaired Cl Ϫ recycling across the basolateral membrane of acid-secreting ␣-intercalated cells of the CCD (9). It is interesting that ClC-Kb/2 expression in the distal renal tubule largely aligns with that of the K ϩ channel KCNJ10/16 (K ir 4.1/5.1).The latter is also found on the basolateral mem- brane in the TAL, DCT, and CNT/CCD (20,47,88,95,97).KCNJ16 (K ir 5.1) alone fails to form a functional channel (10).However, it heteromerizes with K ir 4.1 to form a channel with distinct biophysical properties compared with a K ir 4.1 homomer (78).It was further demonstrated that K ir 4.1/5.1 is the predominant K ϩ channel in the DCT (51,97) and CNT/CCD (47,95).Coordinated actions of the Na ϩ -K ϩ -ATPase and K ir 4.1/5.1 mediate K ϩ recycling across the basolateral membrane, thereby modulating sodium reabsorption in the distal nephron (29).Loss-of-function mutations in the gene encoding K ir 4.1 cause SeSAME/EAST syndrome in humans, leading to electrolyte imbalance reminiscent of Gitelman's syndrome, including salt wasting, hypocalciuria, hypomagnesemia, and hypokalemic metabolic alkalosis (8,67).It is also thought that K ir 4.1/5.1-mediatedK ϩ efflux contributes to setting the resting basolateral membrane potential to establish a favorable electrochemical driving force for transcellular and paracellular transport in DCT and CCD (88,94,97).Despite the fact that ClC-Kb/2 activity underlies electrogenic Cl Ϫ movement across the basolateral membrane, a role of this channel in establishing membrane potential has not been reported.Moreover, the pan-specific Cl Ϫ channel inhibitor NPPB had no effect on the basolateral resting membrane potential of intercalated cells in the CCD prominently expressing ClC-Kb/2 (93).Thus, the current view is that ClC-Kb/2 uses a negative basolateral membrane potential generated by coordinated actions of K ir 4.1/5.1 and Na ϩ -K ϩ -ATPase to drive Cl Ϫ from cytosol to the interstitium. Distal convoluted tubule In the TAL and DCT, ClC-Kb/2 and K ir 4.1/5.1 are expressed in the same cells (Fig. 
1A).Furthermore, recent evidence from Wang's group documented a functional coupling between these channels (97).It was proposed that K ir 4.1/5.1 plays a dominant role in determining basolateral membrane voltage creating a favorable driving force for basolateral Cl Ϫ exit.Genetic ablation of K ir 4.1 not only depolarizes the basolateral membrane but also reduces NPPB-dependent Cl Ϫ current.The nature of the functional coupling between K ir 4.1/5.1 and ClC-Kb/2 is not completely understood but may involve downregulation of Ste20-like proline-alanine rich kinase (SPAK) (97).SPAK is a well-established critical downstream effector of the with-no-lysine (WNK) signaling (most notably WNK1 and WNK4), a master signaling network governing electrolyte handling in the distal renal tubule by phosphorylating and controlling abundance of a number of transporters and ion channels, including NCC, NKCC2, and ROMK1 (33,34).Mutations in WNK1 and WNK4 cause familial hyperkalemic hypertension in humans (also known as pseudohypoladosteronism type 2 or Gordon syndrome) (89).It is interesting that intracellular Cl Ϫ through its binding to the catalytic site stabilizes the inactive conformation of WNK1, preventing kinase autophosphorylation and activation (58).Similarly, regulation of NCC by WNK4 also depends on intracellular Cl Ϫ (6).Of note, WNKs have different sensitivity to Cl Ϫ , with WNK4 being inhibited at much lower Cl Ϫ concentrations than WNK1 (79).Therefore, ClC-Kb/2-mediated Cl Ϫ exit might be an important determinant in defining the functional status of the WNK network in the distal tubule depending on Cl Ϫ delivery and activity of the apical transporters.Future studies will likely test this plausible hypothesis.In addition, it is also not known whether a reciprocal regulation of ClC-Kb/2 by WNKs or their downstream effectors, such as SPAK, can occur. The alliance between ClC-Kb/2 and K ir 4.1/5.1 ends in the CCD (Fig. 
1B).This segment contains two fundamentally different cell types.The majority (ϳ70% of total) are principal cells expressing the epithelial sodium channel (ENaC) and ROMK on the apical side and have abundant Na ϩ -K ϩ -ATPase expression on the basolateral membrane (57).These cells control Na ϩ reabsorption or Na ϩ /K ϩ exchange (57,74).The remaining 30% are intercalated cells, which are primarily, but not exclusively (20,32,48), involved in regulation of acid-base balance and transcellular Cl Ϫ reabsorption (63,86).They possess a different set of transporting proteins and can be further subdivided into acid-secreting A type, base-secreting B type, and non-A non-B, also called "transitioning" type.The interested reader may refer to a recent comprehensive review for more details about classification and morphological aspects of intercalated cells (63).Fluorescent and electrophysiological studies suggest that principal and intercalated cells are not electrically coupled through gap junctions (54,90).Indeed, they are able to maintain different resting basolateral membrane voltages that are close to the equilibrium potential for K ϩ (approximately Ϫ70 mV) for principal and for Cl Ϫ (approximately Ϫ15 mV) for intercalated cells (54,94).It is interesting that ClC-K2 channel and its regulatory subunit barttin are located on the basolateral membrane of intercalated cells only (45,55,80), whereas K ir 4.1/5.1 expression is restricted to the basolateral membrane of principal cells (47,94,95).Using patch-clamp electrophysiology in freshly isolated CCDs, Teulon's group failed to detect ClC-K2 and K ir 4.1/5.1 in the same patch (47).However, there is a small overlap between functional expression of both channels in the CNT (55), likely reflecting a transition from nephron segments where ClC-K2 and K ir 4.1/5.1 are expressed in the same cell (TAL and DCT) to those where the channels are "divorced" to distinct principal and intercalated cells (CCD). While the exact physiological role of ClC-Kb/2 in the intercalated cells of the CCD is not firmly established, it is viewed that this channel is likely involved in trans-cellular Cl Ϫ reabsorption driven by the electroneutral Cl Ϫ /HCO 3 Ϫ transporter Slc26a4 (pendrin) in base-secreting B type of intercalated cells and the Slc4a11 transporter acting as an electrogenic Cl Ϫ /HCO 3 Ϫ exchanger or a Cl Ϫ channel in acid-secreting A type of intercalated cells on the apical side (Fig. 1B) (91).It is interesting that functional expression of ClC-K2 in intercalated cells is comparable with that in TAL and DCT cells (52,87).Taking into account much lower capacity of CCD to reabsorb Na ϩ (3 vs. 25% for TAL and 5-10% for DCT), ClC-Kb/2 may also be involved in processes other than equimolar Cl Ϫ transport at this site.Because of its remarkable pH sensitivity, ClC-Kb/2 can participate in regulation of acid-base balance performed by intercalated cells.As has been described above, Bartter's syndrome type 3 caused by loss-of-function mutations in ClC-Kb is characterized by metabolic alkalosis (70).Thus, it is possible that channel dysfunction in base-secreting B-type intercalated cells may at least partially be responsible for the altered systemic pH balance.Again, future studies are necessary to explore a possible role of ClC-Kb in urinary excretion of acid. 
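For readers less familiar with the electrochemical bookkeeping used throughout this section, the equilibrium potentials quoted above, roughly −70 mV for K+ in principal cells and −15 mV for Cl− in intercalated cells, follow from the Nernst equation; the general form is sketched below, and the particular intra- and extracellular concentrations that yield those numbers are not specified in the text.

\[
E_X \;=\; \frac{RT}{z_X F}\,\ln\!\frac{[X]_{\mathrm{out}}}{[X]_{\mathrm{in}}},
\]

where R is the gas constant, T the absolute temperature, F the Faraday constant, and z_X the valence of ion X (+1 for K+, −1 for Cl−). A membrane whose conductance is dominated by a single ion rests near that ion's equilibrium potential, which is the sense in which Kir4.1/5.1 sets the principal-cell potential and ClC-K2 activity keeps the intercalated-cell potential near the Cl− equilibrium potential.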
Regulation of ClC-Kb/2 by insulin and insulin-like growth factor-1 in CCD.Despite the unequivocal role of ClC-Kb/2 in controlling water and electrolyte balance in the kidney, little has been done toward uncovering systemic factors regulating expression and function of the channel.Using native CCDs, our group has recently reported a direct action of insulin and insulin growth factor-1 (IGF-1) on single channel ClC-K2 activity (93).Insulin and IGF-1 have substantial structural similarity, and the same is also true for their respective insulin and IGF-1 receptors (43,62).Apart from action on metabolism, insulin and IGF-1 are known to regulate urinary excretion of electrolytes affecting tubular transport in multiple segments, including CCD (5,27,28).It has been directly demonstrated that insulin and IGF-1 augment ENaC-mediated sodium reabsorption in CCD principal cells (35,60,75).In general, both hormones reduce urinary sodium excretion in humans (13,21).However, they considerably differ with respect to their actions on systemic blood pressure.Thus, in patients with acromegaly, augmented circulating IGF-1 levels result in antinatriuresis and hypertension, which can be corrected with ENaC inhibitor amiloride (40,41).In contrast, the effects of insulin are primarily determined by plasma glucose and K ϩ levels and often do not lead to salt retention and elevation in blood pressure (19,49).Furthermore, IGF-1 reduces renal K ϩ excretion (21), whereas insulin can promote kaliuresis (19), particularly when plasma K ϩ levels are exogenously clamped (31,61).Electrogenic ENaC-mediated sodium entry across the apical membrane of principal cells creates a favorable electrochemical gradient that can then be used to drive K ϩ secretion via ROMK as well as paracellular and transcellular Cl Ϫ reabsorption (reviewed in Refs.57 and 74).This is accompanied by the coordinated actions of the basolaterally localized Na ϩ -K ϩ -ATPase and K ir 4.1/5.1 to extrude Na ϩ and maintain high negative basolateral membrane potential (20,88,94).Both insulin and IGF-1 augment Na ϩ reabsorption in the CCD by stimulating ENaC and K ir 4.1/5.1 and hyperpolarizing the basolateral membrane (94).Interestingly, our group demonstrated that IGF-1 also stimulates ClC-K2 activity in the intercalated cells of freshly isolated murine CCDs in a phosphatidylinositol 3-kinase-dependent manner, whereas insulin blocks the Cl Ϫ channel via a mechanism involving stimulation of MAPK (93).This unexpected observation provides an important insight into mechanisms underlying distinct patterns of urinary electrolyte excretion in response to insulin and IGF-1 (Fig. 2).Thus, IGF-1, by stimulating ENaC-mediated sodium transport in principal and ClC-K2 channels in intercalated cells, facilitates cooperative NaCl reabsorption (i.e., volume retention), thus reducing the driving force for K ϩ secretion by the CCD.In favors coupling of Na ϩ reabsorption with K ϩ secretion (without volume retention) at the apical membrane of principal cells contributing to kaliuresis (Fig. 2).It is also plausible to speculate that modulation of ClC-K2 activity in the CCD might be of clinical importance to dissociate sodium reabsorption from K ϩ secretion at this site, avoiding potentially life threating hyperkalemia as a result of adverse actions of diuretics targeting ENaC, such as amiloride. 
Apart from insulin/IGF-1 effects, other systemic factors may also affect ClC-Kb/2 function in the kidney.Thus, increased dietary salt intake had no influence on ClC-K2 mRNA in the cortex but reduced ClC-K2 mRNA expression in the outer and inner medulla of Dahl salt-resistant rats and Dahl salt-sensitive rats (12).This reduction was much more prominent in saltresistant compared with salt-sensitive Dahl rats.Thus, increased medullary salt reabsorption may contribute to an inability of these animals to excrete an increased salt load (12).It is unknown if changes in circulating aldosterone levels contribute to the regulation and whether this also translates into changes of ClC-K2 protein expression. Concluding remarks.It has been long recognized that the ClC-Kb/2 channel is a significant contributor to water and electrolyte handing by the kidney with dysfunction leading to Bartter's syndrome type 3. Despite this, ClC-Kb/2 is probably one of the least studied channels in the kidney.We have achieved significant progress in characterization of the diseasecausing mutations and uncovering sites of ClC-Kb/2 expression on the basolateral membrane of distal tubular segments starting from TAL to CCD.Apart from insulin and IGF-1, little is known about cellular and systemic mechanisms controlling channel function and expression.Furthermore, supportive evidence indicates that the channel may serve different purposes in each tubular segment.It is clear that significant experimental effort needs to be invested to thoroughly investigate ClC-Kb/2 function in the kidney since targeting activity of this channel might be of clinical relevance for blood pressure management with little systemic effects considering its almost exclusive localization to the nephron. Fig. 1 . Fig.1.ClC-Kb/2 expression and function in the distal nephron.A: schematic representation of electrolyte transport in the thick ascending limb (TAL, top) and distal convoluted tubule (DCT, bottom).ClC-Kb/2 mediates basolateral Cl Ϫ exit participating in transcellular Cl Ϫ reabsorption together with the apically localized NKCC2 (Slc12A1) and NCC (Slc12A3) in the TAL and DCT, respectively.Activity of the Na ϩ -K ϩ -ATPase and K ϩ Kir4.1/5.1 (KCNJ10/16) channel determines basolateral membrane voltage, creating favorable electrochemical gradient for ClC-Kb/2-mediated Cl Ϫ current.In addition, electroneutral K ϩ -Cl Ϫ exit via KCC4 is outlined.Cl Ϫ dependency of the with-no-lysine (WNK)/Ste20-like proline-alanine rich kinase (SPAK) signaling network is shown for DCT.A similar pathway likely operates in TAL, but the direct experimental evidence is missing.B: schematic representation of electrolyte transport in the cortical collecting duct (CCD).Principal cells express the epithelial sodium channel (ENaC) on the apical side and Kir4.1/5.1 and Na ϩ -K ϩ -ATPase on the basolateral membrane to control Na ϩ reabsorption and K ϩ secretion via the apical ROMK channel.H ϩ (type A)-and HCO 3 Ϫ (type B)-secreting cells regulate acid-base balance and mediate transcellular Cl Ϫ reabsorption via Slc4A11 and Slc26A4 (pendrin) anion exchangers and basolateral ClC-Kb/2 Cl Ϫ channel, respectively. Fig. 2 . 
Fig. 2. Proposed role of ClC-Kb/2 in separating Na⁺ and K⁺ fluxes in the CCD upon insulin and insulin-like growth factor-1 (IGF-1) actions. A: insulin stimulates ENaC and Kir4.1/5.1 in principal cells but inhibits ClC-K2 in intercalated cells. This favors coupling of ENaC-mediated Na⁺ reabsorption with K⁺ secretion via apically localized ROMK channels in principal cells. B: IGF-1 stimulates both ENaC-mediated sodium transport in principal cells and ClC-K2 channels in intercalated cells, thereby facilitating cooperative sodium and Cl⁻ reabsorption in the CCD. For clarity, only the B type of intercalated cells is shown.
2017-08-28T11:54:52.036Z
2016-05-15T00:00:00.000
{ "year": 2016, "sha1": "77ec426dec92f1f9efee0003ebaea238ff1f08a1", "oa_license": "CCBY", "oa_url": "https://www.physiology.org/doi/pdf/10.1152/ajprenal.00577.2015", "oa_status": "HYBRID", "pdf_src": "ScienceParseMerged", "pdf_hash": "f9fa8b91b0addb259fccd39bbe8efa32f060ec32", "s2fieldsofstudy": [ "Medicine", "Biology" ], "extfieldsofstudy": [ "Biology", "Medicine" ] }
195766627
pes2o/s2orc
v3-fos-license
Combinations of Plant Essential Oil Based Terpene Compounds as Larvicidal and Adulticidal Agent against Aedes aegypti (Diptera: Culicidae)

Insecticidal plant-based compound(s) in combinations may show synergistic or antagonistic interactions against insect pests. Considering the rapid spread of Aedes-borne diseases and the increasing resistance among Aedes populations against conventional insecticides, twenty-eight combinations of plant essential oil-based terpene compounds were prepared and tested against larval and adult stages of Aedes aegypti. Initially, five plant essential oils (EOs) were assessed for their larvicidal and adulticidal efficacy, and two major compounds from each EO were identified from GC-MS results. The identified major compounds, namely Diallyldisulfide, Diallyltrisulfide, Carvone, Limonene, Eugenol, Methyl Eugenol, Eucalyptol, Eudesmol and α-pinene, were purchased and tested individually against A. aegypti. Binary combinations of these compounds were then prepared using sub-lethal doses and tested, and their synergistic and antagonistic effects were determined. The best larvicidal composition was obtained when Limonene was mixed with Diallyldisulfide, and the best adulticidal composition was obtained when Carvone was mixed with Limonene. The commercially used synthetic larvicide Temephos and adulticide Malathion were tested individually and in binary combinations with the terpene compounds. The results revealed that the combination of Temephos and Diallyldisulfide and the combination of Malathion and Eudesmol were the most effective combinations. These effective combinations hold considerable promise for use against Aedes aegypti.

Plant essential oils (EOs) are secondary metabolites comprising different bioactive compounds and have been gaining importance as alternatives to synthetic insecticides. They are not only eco-friendly and user-friendly but, being mixtures of different bioactive compounds, also offer less chance of resistance development 1 . With the accessibility of the GC-MS technique, researchers have explored the constituent compounds of different plant EOs, and more than 3000 compounds from 17500 aromatic plants have been identified 2 , most of which were tested for their insecticidal properties and reported to have insecticidal effects 3,4 . Some studies highlighted equal or higher toxicity of a major constituent compound compared with its crude EO. However, application of a single compound may again leave a chance for resistance development, as with chemical insecticides 5,6 . Hence, emphasis is nowadays given to preparing mixtures of EO-based compounds to enhance insecticidal effects as well as to reduce the probability of development of resistance by the targeted pest population. The individual active compounds present in an EO may exhibit synergistic or antagonistic effects in combinations, representing the overall activity of the EO, a fact highlighted well in studies carried out by previous workers 7,8 . EOs and their constituents are also incorporated in vector control programmes. Mosquitocidal activities of EOs have been extensively studied on Culex and Anopheles. A few studies also attempted to formulate effective insecticides by combining different botanicals with commercially used synthetic insecticides, aiming to increase the overall toxicity as well as to minimize side effects 9 . However, studies of such formulated compounds against Aedes aegypti are still scanty.
Advancements in medical science, with the development of medication and vaccination, help to handle some of the vector-transmitted diseases.

Adulticidal activity of the EO. Except for the EO of Ocimum sanctum (Os), the remaining four selected EOs showed clear adulticidal effects, with LC50 values between 23.37 ppm and 120.16 ppm at the 24 h exposure period. The highest adulticidal efficacy was recorded for the EO of Callistemon linearis (Cl), with an LC50 value of 23.37 ppm at 24 h post exposure, followed by Eucalyptus maculata (Em) with an LC50 value of 101.91 ppm (Table 1). On the other hand, the LC50 value for Os was not determined, as a maximum of 53% mortality was recorded at the highest dose applied (Supplementary Fig. 3).

Analysis of effective EO components. Based on the NIST library database results, the area percentages of the GC-chromatogram and the MS spectral results, two major constituent compounds from each EO were identified and selected (Table 2). For the EO of As, the major compounds identified were Diallyldisulfide and Diallyltrisulfide.

Acute larvicidal effects of binary mixtures. Among all the 28 binary mixtures tested for larvicidal activity, nine combinations were found to have synergistic actions, fourteen antagonistic actions, while five combinations were found to have no larvicidal effect. Among the synergistic combinations, the combination between Diallyldisulfide and Temephos was found to be the most effective, with 100% observed mortality at 24 hours (Table 4). Again, mixtures of Limonene with Diallyldisulfide and of Eugenol with Temephos showed good potential, with 98.3% observed larval mortality (Table 5). Four other combinations, i.e., Eudesmol plus Eucalyptol, Eudesmol plus Limonene, Eucalyptol plus α-pinene and α-pinene plus Temephos, also showed remarkable larvicidal efficacy, with more than 90% observed mortality against roughly 60-75% expected mortality (Table 4). However, combinations of Limonene with α-pinene or Eucalyptol showed an antagonistic response. Similarly, mixtures of Temephos with Eugenol, Eucalyptol, Eudesmol or Diallyltrisulfide were found to be antagonistic. Again, the combination between Diallyldisulfide and Diallyltrisulfide, and combinations of any one of these compounds with Eudesmol or Eugenol, were antagonistic in larvicidal action. Combinations of Eudesmol with Eugenol or α-pinene were also recorded as antagonistic.

Acute adulticidal effects of binary mixtures. Among all the 28 binary mixtures tested for adulticidal activity, seven combinations were found to have synergistic actions, six had no effect, while the other fifteen were recorded as antagonistic. The mixtures of Eudesmol plus Eucalyptol and Limonene plus Carvone were found to be more effective than the other synergistic combinations, with 76% and 100% observed mortality, respectively, after 24 h (Table 5). Malathion was observed to show synergistic action in combination with all the compounds except Limonene and Diallyltrisulfide. On the other hand, the combination between Diallyldisulfide and Diallyltrisulfide, and combinations of any one of them with Eucalyptol, Eudesmol, Carvone or Limonene, were found to be antagonistic. Similarly, combinations of α-pinene with Eudesmol or Limonene, Eucalyptol with Carvone or Limonene, and Limonene with Eudesmol or Malathion showed antagonistic effects. For the other six combinations, the expected and observed mortalities were not found to be significantly different (Table 5).

Bioassay of effective combinations in large insect mass.
Based on synergistic effects and sub-lethal doses, four combinations (Eudesmol plus Limonene, Eugenol plus Limonene, Diallyldisulfide plus Limonene and Diallyldisulfide plus Temephos) were finally selected and further tested for their larvicidal toxicity against large numbers of Aedes aegypti. The results showed 100% observed larval mortality in response to the binary combinations of Eugenol-Limonene, Diallyldisulfide-Limonene and Diallyldisulfide-Temephos against 76.48%, 72.16% and 63.4% expected larval mortality, respectively (Table 6). The combination between Limonene and Eudesmol was comparatively less effective, showing 88% observed larval mortality at the 24 h exposure period (Table 6). So, in large-scale application too, the four selected binary combinations showed a synergistic larvicidal effect against A. aegypti (Table 6).

For the adulticidal bioassay, three synergistic combinations were selected to be applied against large numbers of adult A. aegypti. For selection of combinations to be tested against a large insect mass, emphasis was first given to the two best synergistic combinations of terpene compounds, namely Carvone plus Limonene and Eucalyptol plus Eudesmol. Secondly, one best synergistic combination was selected from the pairs of the synthetic organophosphate Malathion with a terpene compound. Here we considered the combination of Malathion plus Eudesmol as the best combination to be tested against a large insect mass because of its maximum observed mortality and the very low LC50 values of the constituent candidates. Malathion showed synergistic action when combined with α-pinene, Diallyldisulfide, Eucalyptol, Carvone and Eudesmol. However, looking at the LC50 values, the value for Eudesmol was the lowest (2.25 ppm); the calculated LC50 values for Malathion, α-pinene, Diallyldisulfide, Eucalyptol and Carvone were 5.4, 716.55, 166.02, 17.6 and 140.79 ppm, respectively. These values indicated the combination between Malathion and Eudesmol to be the best combination from the dose point of view. The results revealed that the combinations of Carvone plus Limonene and Eudesmol plus Malathion showed 100% observed mortality against 61% to 65% expected mortality. Another combination, Eudesmol plus Eucalyptol, showed 78.66% mortality against 60% expected mortality after the 24 h exposure period. All three selected combinations showed synergistic action even in large-scale applications against adults of Aedes aegypti (Table 6).

Table 3. Sub-lethal concentrations (LC50) of different terpene compounds against 4th instar larvae and adults of Aedes aegypti.

Discussion

In the present investigation, the selected plant EOs of Mp, As, Os, Em and Cl showed promising lethal effects against larval and adult stages of Aedes aegypti. The larvicidal activity of the EO of Mp was recorded as the highest, with an LC50 value of 0.42 ppm, followed by the EOs of As, Os and Em, having LC50 values below 50 ppm at 24 h. These findings were in line with previous studies carried out on mosquitoes and other dipteran flies [10][11][12][13][14] . Although the larvicidal potency of Cl was comparatively lower than that of the other EOs, with an LC50 value of 163.65 ppm at 24 h, its adulticidal potential was found to be the highest, with an LC50 value of 23.37 ppm at 24 h.
The EOs of Mp, As and Em also showed good adulticidal potential, with LC50 values within the range of 100-120 ppm at the 24 h exposure period, although comparatively lower than their larvicidal efficiency. On the other hand, the EO of Os showed a negligible adulticidal effect even at the highest dose of treatment. Thus, the results reflect that the toxicity of plant EOs may vary with respect to the developmental stage of the mosquito 15 . It also depends on the penetration rate of the EO into the insect body, its interaction with specific target enzymes and the detoxification ability of the mosquito at each developmental stage 16 . A good number of studies indicate that the major constituent compound(s) are the factors responsible for the bioactivity of an EO, as they comprise the major fraction of the total compounds 3,12,17,18 . Therefore, we considered two major compounds from each EO. From the GC-MS results, Diallyldisulfide and Diallyltrisulfide were identified as the major compounds of the EO of As, which was in conformity with previous reports [19][20][21] . Again, Carvone and Limonene were identified as the major compounds of the EO of Mp, although previous reports suggested menthol as one of its major compounds 22,23 . The constituent profile of the EO of Os revealed Eugenol and Methyl Eugenol as the major compounds, showing similarity with the findings of earlier researchers 16,24 . Eucalyptol and Eudesmol were recorded as the principal compounds present in Em leaf oil, which was in line with the findings of some researchers 25,26 but contradicted the findings of Olalade et al. 27 . Dominance of Eucalyptol and α-pinene was observed in the EO of Callistemon linearis, showing similarity with previous studies 28,29 . The intraspecific variation in the constituent composition and concentration of EOs extracted from the same plant species growing in different places, reported previously and also observed in the present study, is influenced by the geographical conditions where the plant grows, harvesting time, developmental stage or age of the plant, occurrence of chemotypes, etc. 22,[30][31][32] .

The identified major compounds were then purchased and tested for their larvicidal and adulticidal effects against Aedes aegypti. The results revealed that the larvicidal activity of Diallyldisulfide was equal to the activity of the crude EO of As, but the activity of Diallyltrisulfide was higher than that of the EO of As. These findings were similar to those of Kimbaris et al. 33 , working on Culex pipiens. However, these two compounds did not show good adulticidal activity against the target mosquito, which was in conformity with the findings of Plata-Rueda et al. 34 , who worked on Tenebrio molitor.

Table 6. Acute effects of binary mixtures (1:1) of the LC50 doses of selected terpene compounds against 4th instar larvae and adults of Aedes aegypti and type of interaction after large-scale application (n = 300 for larvae and 150 for adults).

The EO of Os was found to be effective against larval stages of Aedes aegypti but not against adult stages. The larvicidal activity of each major individual compound was found to be lower than the activity of the crude EO of Os, implying a role for other compounds and their interactions in the crude EO. Methyl Eugenol individually possessed negligible activity, whereas Eugenol individually possessed moderate larvicidal activity. This finding in one way supported 35,36 and in another way contradicted the findings of earlier investigators 37,38 .
The difference in the functional group between Eugenol and Methyl Eugenol might account for the difference in their toxicity against the same target insect 39 . Limonene was found to possess moderate larvicidal activity, while Carvone showed a negligible effect. Similarly, the comparatively lower toxicity of Limonene and higher toxicity of Carvone against adults supported the findings of some previous studies 40 and opposed others 41 . Possession of double bonds in both endocyclic and exocyclic positions might add to the advantage of these compounds as larvicidal agents 3,41 , while, as a ketone with unsaturated α and β carbons, Carvone might show higher toxicity as an adulticide 42 . However, the individual performances of Limonene and Carvone were considerably lower than that of the whole EO of Mp (Tables 1, 3). Among the terpene compounds tested, Eudesmol was found to possess the highest effect both as a larvicide and as an adulticide, with LC50 values below 2.5 ppm, and emerged as a promising compound for Aedes control. Its performance was better than that of the whole EO of Em, though this was not in line with the finding of Cheng et al. 40 . Eudesmol, a sesquiterpene with three isoprene units, is less volatile than an oxygenated monoterpene like Eucalyptol and thus has a greater potential as an insecticide. Eucalyptol on its own showed higher adulticidal than larvicidal activity, which was both supported and opposed by the findings of earlier workers 37,43,44 . Its individual activity was almost at par with the activity of the whole EO of Cl. Another bicyclic monoterpene, α-pinene, was found to possess a lower adulticidal than larvicidal effect against Aedes aegypti, which was opposite to the performance of the whole EO of Cl. The overall insecticidal activity of a terpene compound is affected by its lipophilicity, volatility, branching of the carbon atoms, projection area, surface area, functional groups and their positions, etc. 45,46 . The compounds may exert effects by disintegrating cell mass, blocking respiratory activity, interrupting nerve impulse transmission, etc. 47 .

The larvicidal activity of the synthetic organophosphate Temephos was found to be the highest, with an LC50 value of 0.43 ppm, which was in accordance with the findings of Lek-Uthal 48 . The adulticidal activity of the synthetic organophosphate Malathion was recorded as 5.44 ppm. Although both organophosphates showed a good response against the laboratory strain of Aedes aegypti, development of resistance by mosquitoes against these compounds has been reported from different parts of the globe 49 . However, no such reports of resistance development have been documented against botanicals 50 . Therefore, botanicals are considered a potential alternative to chemical insecticides in vector control programmes.

Out of the 28 binary combinations (1:1) prepared from effective terpene compounds, and from terpene compounds with Temephos, tested for larvicidal action, nine combinations were found to be synergistic, 14 antagonistic and five to have no effect. On the other hand, in the case of the adulticidal bioassay, seven combinations were found to be synergistic and 15 antagonistic, while six combinations were recorded to have no effect. The reason for the synergistic action of some combinations might be the interaction of candidate compounds with different vital pathways at the same time, or the serial inhibition of different key enzymes of a particular biological pathway 51 .
Combinations of Limonene with Diallyldisulfide, Eudesmol or Eugenol were found to be synergistic both in small-scale and large-scale application (Table 6), while its combinations with Eucalyptol or α-pinene were found to show an antagonistic effect against larvae. On average, Limonene was found to be a good synergist, which might be due to the presence of a methyl group, good cuticular penetration and different modes of action 52,53 . It was reported earlier that Limonene can exert its toxic effect by penetrating through the insect cuticle (contact toxicity), targeting the digestive system (anti-feeding) or acting on the respiratory system (fumigant activity) 54 , while a phenylpropanoid like Eugenol might target metabolic enzymes 55 . Thus, compounds with different modes of action might, in combination, increase the total lethal action of the mixture. Eucalyptol with Diallyldisulfide, Eudesmol or α-pinene was found to be synergistic, but the rest of its combinations with other compounds either had no larvicidal effect or showed antagonistic action. Earlier studies demonstrated inhibitory activity of Eucalyptol on AChE as well as on octopamine and GABA receptors 56 . As cyclic monoterpenes, Eucalyptol, Eugenol, etc. might share the same mode of action, such as neurotoxic activity 57 , thereby minimizing their combined effect by inhibiting each other. Again, the combinations of Temephos with Diallyldisulfide, α-pinene and Limonene were found to be synergistic, which supported previous reports of synergism between plant products and synthetic organophosphates 58 . The combination between Eudesmol and Eucalyptol was found to be synergistic against both larval and adult stages of Aedes aegypti, which might be due to their different modes of action arising from their dissimilar chemical structures. Eudesmol, which is a sesquiterpene, might target the respiratory system 59 , while Eucalyptol, which is a monoterpene, might affect the acetylcholinesterase enzyme 60 . The combined effect of constituents on two or more target sites might boost the total lethal action of the combination.

In the case of the adulticidal bioassay, Malathion was found to show synergism with Carvone, Eudesmol, Eucalyptol, Diallyldisulfide or α-pinene, reflecting it as a good synergistic adulticidal candidate for combination with all the terpene compounds except Limonene and Diallyltrisulfide. A similar finding of synergism of Malathion with plant extracts was reported by Thangam and Kathiresan 61 . This synergistic response might be due to the combined toxic effects of Malathion and phytochemicals on the detoxifying enzymes of the insect body. Organophosphates like Malathion generally exert their effects by inhibiting esterases and cytochrome P450 monooxygenase enzymes [62][63][64] . Therefore, the combination of Malathion, having these modes of action, with terpene compounds having different modes of action might enhance the total lethal effect against the mosquito. On the other hand, an antagonistic effect indicates that the selected compounds in a combination are less active together than the individual effect of each compound. The reason for antagonism in some combinations might be the alteration of the behavior of one compound by the other, by changing the rate of absorption, distribution, metabolism or excretion, as has been suggested as a possible mechanism of antagonism in combinations of drug molecules by earlier researchers 65 .
Again, a possible cause of antagonism might be the competition of constituent compounds for a single receptor or target site due to a similar mode of action. In some cases, non-competitive inhibition of the target protein might also occur. In the present study, the two organosulfur compounds, namely Diallyldisulfide and Diallyltrisulfide, showed antagonism, possibly because of competition for the same target site. Again, these two sulfur compounds, when combined with Eudesmol or α-pinene, showed antagonistic or no effect. Eudesmol and α-pinene are cyclic in nature, while Diallyldisulfide and Diallyltrisulfide are aliphatic. Based on chemical structure, the total lethal activity was expected to be enhanced in combinations of these compounds, as their target sites are usually different 34,47 , but experimentally we found an antagonistic effect, which might be due to some unknown biological interactions of these compounds in the living system. Similarly, the combination of Eucalyptol with α-pinene resulted in an antagonistic response, though the target sites of action of these two compounds were reported to be different by earlier researchers 47,60 . As both compounds are cyclic monoterpenes, there may be certain common target sites for which they compete for binding, which influenced the overall toxicity of the combination pair in this study.

Considering the LC50 values and observed mortalities, the two best synergistic combinations of terpene compounds, viz. the pairs Carvone plus Limonene and Eucalyptol plus Eudesmol, and the best synergistic combination of the synthetic organophosphate Malathion with a terpene compound, i.e. Malathion plus Eudesmol, were chosen for the adulticidal bioassay against a large insect mass to confirm whether these effective combinations would work against a large number of individuals in a comparatively larger exposure space. All these combinations showed a synergistic response against the large insect mass. Similar results were found for the best larvicidal synergistic combinations, which were tested against large numbers of A. aegypti larvae. Therefore, it can be stated that the effective synergistic larvicidal and adulticidal combinations of plant EO-based compounds are competent candidates alongside existing synthetic chemicals and can further be used to control Aedes aegypti populations. Similarly, the effective combinations of the synthetic larvicide or adulticide with terpene compounds may further be used to reduce the dose of Temephos or Malathion applied against the mosquito. These synergistic combinations with potent efficacy may offer a solution to check resistance evolution in Aedes mosquitoes in the future.

Materials and Methodology

Establishment of Aedes aegypti colony. Eggs of Aedes aegypti were collected from the Indian Council of Medical Research-Regional Medical Research Centre, Dibrugarh, and reared in the Department of Zoology, Gauhati University, under controlled temperature (28 ± 1 °C) and humidity (85 ± 5%) following the methods described by Arivoli et al. 66 . After hatching, larvae were fed with larval food (powdered dog biscuit and yeast at a ratio of 3:1), while adults were fed on 10% glucose solution. From the 3rd day after emergence, the adult female mosquitoes were allowed to feed on the blood of albino rats. Filter paper submerged in water kept in a beaker was placed inside the cage for egg laying.

Bioassay of essential oils. Larvicidal assay. The standard WHO procedure 67 was used with slight modification to investigate the larvicidal toxicity. DMSO was used as the emulsifying agent.
Initially, 100 and 1000 ppm concentrations of each EO were tested, exposing 20 larvae per replicate. Based on the results, a series of concentrations was applied, and mortality was recorded from 1 hour to 6 hours at one-hour intervals and at 24, 48 and 72 hours after treatment. The sub-lethal (LC50) concentration was determined after 24, 48 and 72 hours of exposure. Each concentration was assayed in triplicate along with one negative control (water only) and one positive control (DMSO-treated water). If pupation occurred and more than 10% of larvae died in the control group, the test was repeated. If mortality in the control groups was between 5 and 10%, Abbott's correction formula 68 was used.

Collection of plant materials and EO extraction.

Adulticidal assay. The method described by Ramar et al. 69 was followed for the adulticidal bioassay against Aedes aegypti, with acetone used as the solvent. Initially, 100 and 1000 ppm concentrations of each EO were tested against adult Aedes aegypti. Two milliliters of each prepared solution was applied to Whatman no. 1 filter paper (size 12 × 15 cm²) and the acetone was allowed to evaporate for 10 minutes. Filter paper treated with 2 ml of acetone alone was used as the control. After evaporation of the acetone, both the treated and the control filter papers were placed in cylindrical tubes (depth 10 cm). Ten 3-4-day-old non-blood-fed mosquitoes were transferred into each of the three replicates of each concentration. Based on the results of the initial test, different concentrations of the selected oils were tested. Mortality was recorded at 1, 2, 3, 4, 5, 6, 24, 48 and 72 hours from the time the mosquitoes were released. LC50 values were calculated at 24 h, 48 h and 72 h of exposure. If mortality exceeded 20% in the control batch, the whole test was repeated. Again, if mortality in the controls was above 5%, results for the treated samples were corrected using Abbott's formula 68 .

Analysis of effective essential oil components. For analysis of the constituent compounds of the selected EOs, gas chromatography (Agilent 7890A) and mass spectrometry (Accu TOF GCv, Jeol) were performed. The GC was equipped with an FID detector and a capillary column (HP5-MS). The carrier gas was helium at a flow rate of 1 ml/min. The GC programme for Allium sativum was set as 10:

Identification of major terpene compounds of different EOs. Major compounds of each EO were identified based on their area percentages calculated from the GC-chromatogram and the mass spectrometry results, in reference to the NIST standard database 70 .

Bioassays of individual major terpene compounds against A. aegypti. Two major compounds from each EO were chosen from the GC-MS results and purchased from Sigma-Aldrich (98-99% purity) for further bioassay. The larvicidal and adulticidal efficacy of these compounds against A. aegypti was tested following the methods described above. The most commonly used synthetic commercial larvicide, Temephos (Sigma-Aldrich), and adulticide, Malathion (Sigma-Aldrich), were assayed following the same procedure for comparing their efficacy with the selected EO compounds.

Formulation of terpene compounds. Binary mixtures of selected terpene compounds, and of a terpene compound plus a commercial organophosphate (Temephos or Malathion), were prepared by mixing the LC50 dose of each candidate compound in a 1:1 ratio.
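Abbott's correction, cited above, adjusts the observed treatment mortality for the background mortality seen in the controls. The snippet below is a minimal illustrative sketch of that standard formula; it is not code from the original study, and the example percentages are hypothetical.

```python
def abbott_corrected_mortality(treated_pct: float, control_pct: float) -> float:
    """Abbott's formula: corrected % = (treated% - control%) / (100 - control%) * 100."""
    if not 0 <= control_pct < 100:
        raise ValueError("control mortality must be in [0, 100)")
    return (treated_pct - control_pct) / (100.0 - control_pct) * 100.0

# Hypothetical example: 85% mortality in a treated batch with 8% control mortality.
print(round(abbott_corrected_mortality(85.0, 8.0), 2))  # ~83.7
```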
The prepared combinations were tested against both larval and adult stages of Aedes aegypti following the methods described above. Each bioassay was performed in triplicate for each combination, with three replicates for each individual compound present in the respective combination. Mortalities of the target insect were recorded at 24 hours. The expected mortality of each binary mixture was calculated based on the following formula. The effect of each binary mixture was marked as synergistic, antagonistic or no effect based on its calculated χ² value, following the method described by Pavela 52 . The χ² value was calculated for each combination using the following formula, where Om = observed mortality of A. aegypti in response to the binary mixture and E = expected mortality of A. aegypti in response to the binary mixture. The effect of a combination was designated as synergistic when the calculated χ² value was greater than the table value at the respective degrees of freedom at the 95% confidence interval and the observed mortality was greater than the expected mortality. Again, if the calculated χ² value for a combination was greater than the table value at the respective degrees of freedom but the observed mortality was lower than the expected one, then that treatment was considered antagonistic. If for any combination the calculated χ² value was less than the table value at the respective degrees of freedom, then that combination was considered to have no effect.

Bioassay of effective combinations in large insect mass. Based on the observed mortality of the binary mixtures having synergistic action and the LC50 dose of each terpene compound present in the respective mixture, three to four potential synergistic combinations were selected to be tested for larvicidal and adulticidal activity against large numbers of insects (100 larvae and 50 adults), following the methods described above. Along with the mixtures, the individual compounds present in the selected mixtures were also tested against the same numbers of Aedes aegypti larvae and adults. The proportion of the combination was one part of the LC50 dose of one candidate compound and one part of the LC50 dose of the other constituent compound. In the adulticidal bioassay, the selected compounds were dissolved in acetone and applied to filter paper wrapped inside a cylindrical plastic vessel of 1300 cm³ volume. The acetone was allowed to evaporate for 10 minutes before the adult insects were released. Again, in the case of the larvicidal bioassay, the LC50 doses of the candidate compounds were first dissolved in equal amounts of DMSO, then mixed into 1 liter of water kept in a 1300 cm³ plastic vessel, and the larvae were released.

Statistical analysis. The recorded mortality data were subjected to probit analysis 71 for calculating LC50 values using SPSS (version 16) and Minitab software.
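The expected-mortality and χ² expressions referenced above did not survive extraction. As a hedged reconstruction, Pavela-style evaluations typically compute the expected mortality of a 1:1 mixture from the mortalities of its two constituents applied alone, E = Oa + Ob(1 − Oa/100), and compare it with the observed mixture mortality via χ² = (Om − E)²/E against the critical table value. The sketch below illustrates that logic with hypothetical numbers and an assumed single degree of freedom; it should not be taken as the exact formulas of the original paper.

```python
from scipy.stats import chi2

def expected_mixture_mortality(mort_a: float, mort_b: float) -> float:
    """Expected % mortality of a 1:1 binary mixture from the mortalities of its
    constituents applied alone (assumed additive-effect formula)."""
    return mort_a + mort_b * (1.0 - mort_a / 100.0)

def classify_interaction(observed: float, mort_a: float, mort_b: float,
                         df: int = 1, alpha: float = 0.05) -> str:
    """Compare chi-square = (O - E)^2 / E against the critical table value and
    classify the mixture as synergistic, antagonistic or having no effect."""
    expected = expected_mixture_mortality(mort_a, mort_b)
    chi_sq = (observed - expected) ** 2 / expected
    critical = chi2.ppf(1.0 - alpha, df)
    if chi_sq <= critical:
        return "no effect"
    return "synergistic" if observed > expected else "antagonistic"

# Hypothetical example: constituents give 40% and 45% mortality alone,
# while their 1:1 mixture gives 95% observed mortality (expected ~67%).
print(classify_interaction(95.0, 40.0, 45.0))  # -> "synergistic"
```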
2019-07-02T14:50:24.008Z
2019-07-01T00:00:00.000
{ "year": 2019, "sha1": "93c23bc4f16b57a773aa4af95c41dc282aba738e", "oa_license": "CCBY", "oa_url": "https://www.nature.com/articles/s41598-019-45908-3.pdf", "oa_status": "GOLD", "pdf_src": "PubMedCentral", "pdf_hash": "93c23bc4f16b57a773aa4af95c41dc282aba738e", "s2fieldsofstudy": [ "Biology" ], "extfieldsofstudy": [ "Chemistry", "Medicine" ] }
225213143
pes2o/s2orc
v3-fos-license
Model of thermal power plant considering water spray desuperheater for power system analysis

It is of great significance to build a model of the water spray temperature reduction system that is more suitable for electrical simulation and to apply it to the existing thermal power plant model for the simulation analysis of today's complex thermal power plants. Based on reasonable thermodynamic assumptions, the law of conservation of energy and the law of conservation of mass, a transfer function model of a water-jet cooling system suitable for the simulation of a thermal power plant is established by using mechanism modeling. The relationships between the model variables and the effect of each variable on the output variables verified the correctness of the model. A steam turbine system considering the influence of the boiler was further established, and a simulation analysis of its operating characteristics was performed, which verified that considering the influence of the boiler on the dynamic response of the unit is necessary in long-term stability analysis. Finally, combining the models established in this paper, a dynamic model of a thermal power unit with a water spray desuperheating device was established. The model is simulated with a step disturbance of speed, a disturbance of the desuperheating water flow and a disturbance of valve opening. Based on the simulation results, the response characteristics of the process are analyzed. It is verified that the model can accurately describe the main steam parameters and power regulation when applied to large-scale power grid transient simulation.

Introduction

The boiler spray desuperheating device is indispensable in large-scale thermal power units. Its function is to regulate and cool down the superheated, large-flow steam flowing through the boiler [1], which is an important guarantee for the safe operation of the boiler and the steam turbine, keeping the temperature of the steam entering the steam turbine within the normal operating range of the unit. In the case of a sudden change of unit operating conditions, especially after fast load rejection (FCB) of the steam turbine unit, the temperature control of the steam generated by the boiler is closely connected with the coordinated operation of the steam turbine [2]. The reliability of the spray desuperheating system directly affects not only the main equipment of the boiler and steam turbine in the thermal power unit but also the auxiliary equipment, such as the regenerative heater, the steam-driven feed pump turbine, etc. Moreover, because the spray desuperheating system affects the temperature and pressure of the main steam, changes in these parameters alter the enthalpy of the steam entering the steam turbine and, correspondingly, its power-generating capacity; as a result, the unit output fluctuates greatly, which affects the operation and safety of the large power grid associated with the generator [3]. Therefore, it is necessary to study a more suitable spray desuperheating system model and its operating characteristics in order to study its influence on the operating characteristics of thermal power units [4][5][6]. At present, the boiler spray desuperheater has been deeply studied from the perspective of thermal power engineering at home and abroad.
In Reference [7], a modular Simulink library of components such as valves, turbines and heaters has been developed; in this way, it is possible to easily assemble and customize models able to simulate different plants and operating scenarios. In Reference [8], a spray desuperheating system model is proposed, and the nonlinear mathematical model of each module describing the steam temperature system is obtained by using the mechanism modeling method. Furthermore, in Reference [9], based on the principle of mass and energy balance, the mathematical model and the mechanism control model of the spray desuperheater are established from the micro and macro levels, respectively. In Reference [10], a boiler system model is established and the boiler start-up process is simulated; the simulation results are consistent with the actual process. In Reference [11], a dynamic mathematical model of a 600 MW supercritical once-through boiler steam generator, which is suitable for simulating the moving boundary of nonlinear lumped parameters under large disturbance conditions, is established to solve the problem of model switching. In current large power grid transient simulation analysis [12], models of the boiler spray desuperheater and related components are lacking, which makes it difficult to accurately reflect some dynamic characteristics of the thermal systems of thermal power units. According to the simulation requirements of different application scenarios, the relevant literature establishes corresponding simulation models. In Reference [13], a single-reheat turbine model considering the influence of the boiler is studied. In Reference [14], considering the steam-water cycle of the whole thermal system, a simulation model of a unit reheat condensing steam turbine is established, which can take into account the effect of the regenerative system and the reheater on the dynamic response of the unit. In Reference [15], a coordinated control model suitable for whole-process simulation of the electromechanical transient and medium- and long-term dynamics of a power system is proposed, which can flexibly simulate the common configuration modes of boiler control in thermal power plants. In Reference [16], a turbine model with a bypass system and a prime mover speed control system model with the FCB function are established.

In this paper, based on the mechanism modeling method, the mathematical model of the spray desuperheater is established first. Combined with the dynamic stability simulation requirements of the power system, an applicable transfer function is obtained by linearization and Laplace transformation. A steam turbine system model considering the influence of the boiler and the spray desuperheater is then established and applied to the simulation analysis of the dynamic characteristics of a thermal power unit. The validity of the model is verified by simulation and actual measurement.

Dynamic model of spray desuperheating system

The spray desuperheater is used to regulate, depressurize and cool down large-flow steam. The heat and mass transfer process between the desuperheating water and the high-temperature, high-pressure steam can be modeled simply.
According to the requirements of power system dynamic simulation modeling, the following assumptions are set: (1) the cross-sectional areas of the inlet and outlet of the water spray desuperheater are approximately equal; (2) the superheated steam and the desuperheating water are fully and evenly mixed and move at the same velocity; (3) the mixed steam in the desuperheater moves only one-dimensionally along the axial direction; (4) the state parameters of the mixture are uniform over each cross section of the desuperheater pipe; (5) the whole system is adiabatic; (6) axial heat conduction and radial temperature gradients are ignored; and (7) the wall heat transfer process is ignored.

Superheater model. The superheater is the part of the boiler that further heats the steam from the saturation temperature to the superheated temperature. After the saturated steam is heated into superheated steam, its capacity to do work in the steam turbine is improved; that is, the enthalpy drop of the steam in the steam turbine is increased, so the cycle efficiency is improved. The whole process follows the conservation of mass and energy. In Figure 1, V is the volume of steam; ρ2 is the density of the outlet steam; D1 and D2 are the mass flows of the inlet and outlet steam; h1 and h2 are the specific enthalpies of the working medium at the inlet and outlet; Qin is the heat exchange between the working medium in the pipe and the metal pipe wall; ρ1, P1 and P2 are the density of the working medium at the inlet, the inlet pressure of the working medium in the pipe and the outlet pressure of the working medium in the pipe; t1 and t2 are the flue gas inlet and outlet temperatures outside the pipe; tm is the average wall temperature; Qex and Sex are the heat exchange capacity and heat exchange area from the flue gas to the metal, respectively; and mm is the mass of the pipe wall. In the formulas, τ is time; ξ is the pressure loss coefficient; αex is the convective heat transfer coefficient; kin is the convective heat transfer coefficient between the working medium in the pipe and the pipe wall; t2 is the outlet temperature of the working medium in the pipe; n is an index, usually taken as 0.8; Dg is the mass flow of the flue gas outside the pipe; K is the correction coefficient of flue gas heat release; c1 and c2 are the specific heats at the inlet and outlet, respectively; and Cm is the specific heat of the pipe wall.

Formula (1) is linearized and the Laplace transform is applied. The changes of the superheater inlet steam flow rate and heat flow rate are small and can be ignored in dynamic simulation; the outlet steam temperature is mainly affected by the change of the inlet steam temperature [15]. Therefore, the simplified transfer function between the steam temperature at the inlet of the superheater and the steam temperature at its outlet is shown in Figure 2. Under different load levels, the steam temperatures at the inlet and outlet of the superheater change little, so K1 is close to 1, as it is the ratio of the inlet to outlet specific heats, which does not change much; T1 is related to the mass and specific heat of the metal tube wall of the superheater, the steam flow and the specific heat at constant pressure. Because the changes of the other relevant parameters are relatively small, when the load increases, D1 increases and thus T1 decreases.

Model of spray desuperheating section.
In the spray desuperheating process, atomized desuperheating water is sprayed directly into the superheated steam generated by the superheater [1]. As shown in Figure 3, spray desuperheating is a heat exchange process of two-phase mixing, which must satisfy both energy conservation and mass conservation. In Figure 3, D0 is the inlet steam mass flow; DW is the inlet desuperheating water mass flow; D is the outlet steam mass flow; hW, h0 and h are the specific enthalpies of the inlet desuperheating water, the inlet steam and the outlet steam, respectively; tW and t0 are the inlet desuperheating water temperature and the inlet steam temperature, respectively. In Formula (2), V is the volume of the spray desuperheater; ρ is the density of the outlet steam; tm, cm, mm and Qo are the average wall temperature, the specific heat capacity of the wall, the metal mass of the wall and the heat release of the steam per unit time; QW is the heat absorption of the desuperheating water per unit time. Equation (2) is linearized and Laplace-transformed to obtain the transfer function of the spray desuperheating section.

Turbine model considering boiler influence

2.2.1. Once-through boiler model. The once-through boiler model is shown in Figure 5. According to the principle of energy balance and mass balance, the fuel quantity and feed water of the boiler are properly proportioned so that the heat carried by the feed water entering the evaporator meets the requirements of normal boiler operation [13]. In Figure 5, UW is the water supply and Ub is the coal supply; Ta is the delay time of fuel heat release; Hwb is the gain of the water supply flow; Keva is the gain of heat release; Teva is the delay time of steam in the evaporator; Heva is the gain of the steam flow out of the evaporator; Tt is the delay time of steam flowing through the superheater and other pipes; and Ka, Kb and KC are proportionality coefficients.

Turbine model. The turbine model is shown in Figure 6. A natural power overshoot coefficient of the high-pressure cylinder is introduced to reflect the physical phenomenon that, when the regulating valve suddenly opens, the output proportional coefficient of the high-pressure cylinder in the dynamic process is larger than that in the steady state. In Figure 6, FHP, FIP and FLP are the power ratios of the high-, intermediate- and low-pressure cylinders, respectively [17], with FHP + FIP + FLP = 1; TCH is the steam volume time constant; TRH is the reheater time constant; and TCO is the crossover time constant. By combining Figure 5 with Figure 6, a turbine model considering the influence of the boiler is established.

2.2.3. Model of the turbine speed control system considering the spray desuperheating system. Combining the above-mentioned spray desuperheating system model with the steam turbine model considering the influence of the boiler, the boiler-turbine system simulation model is established; its structure is shown in Figure 7. In Figure 7, feed water and coal are adjusted at an appropriate rate to ensure conservation of mass and energy, so as to stabilize the steam pressure and steam temperature. After the given value is determined, the deviation from the system feedback signal is sent to the regulator for calculation, and the resulting instruction is given to the actuator. When the system frequency changes, the speed of the grid-connected generator changes. Generally speaking, the turbine regulating system then acts, changing the steam inflow of the turbine and adjusting the output power of the generator to meet the load demand.

Figure 7. Boiler-turbine system simulation model.
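To make the structure of these lumped models concrete, the following sketch illustrates two of the building blocks described above: the steady-state mass and energy balance of the spray desuperheater (adiabatic mixing of steam and desuperheating water under the assumptions listed earlier) and the step response of a first-order transfer function of the form K/(Ts + 1), as used for the superheater section. It is an illustrative Python approximation, not the authors' Simulink implementation, and all numerical values are hypothetical.

```python
import numpy as np

def desuperheater_outlet(d0, h0, dw, hw):
    """Steady-state adiabatic mixing: mass balance D = D0 + DW and
    energy balance D*h = D0*h0 + DW*hW give the outlet flow and enthalpy."""
    d = d0 + dw
    h = (d0 * h0 + dw * hw) / d
    return d, h

def first_order_step(k, t_const, t_end=100.0, dt=0.1):
    """Step response of K / (T s + 1), integrated with explicit Euler steps."""
    times = np.arange(0.0, t_end, dt)
    y = np.zeros_like(times)
    for i in range(1, len(times)):
        # dy/dt = (K*u - y) / T with unit step input u = 1
        y[i] = y[i - 1] + dt * (k * 1.0 - y[i - 1]) / t_const
    return times, y

# Hypothetical numbers: 400 kg/s steam at 3300 kJ/kg mixed with 30 kg/s water at 700 kJ/kg.
print(desuperheater_outlet(400.0, 3300.0, 30.0, 700.0))  # ~ (430.0, 3118.6 kJ/kg)
# Superheater-like lag with gain close to 1 and a 30 s time constant.
t, resp = first_order_step(k=1.0, t_const=30.0)
print(round(resp[-1], 3))  # approaches the gain K as t grows
```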
(1) Simulation of desuperheating water disturbance. This article uses Simulink software for the simulation. At 20 s, the desuperheating water flow is reduced in a step from 30 kg/s to 15 kg/s, and the other inputs remain unchanged. The simulation and measurement results are shown in Figure 8. Figure 8 shows that the simulated and measured transient responses and steady-state values are roughly the same, indicating that the established spray desuperheating system model is accurate and effective. Generally speaking, when the flow of desuperheating water decreases, the enthalpy and temperature of the outlet steam increase and the flow of the outlet steam decreases, and the temperature and enthalpy of the outlet steam stabilize again within 20 s.

Simulation analysis of operating characteristics of the steam turbine system considering the influence of the boiler. The single-reheat steam turbine model considering the influence of the boiler, formed by combining Figure 5 and Figure 6, is simulated for load step-up and load step-down commands and compared with the measured results and with the original simulation results that do not consider the influence of the boiler, as shown in Figure 9 (step experiment on load). As can be seen from Figure 9, after the load step-up command is issued, the electromagnetic power increases and the main steam pressure decreases; after the load step-down command is issued, the electromagnetic power decreases and the main steam pressure increases. The simulation curve obtained from the steam turbine model considering the influence of the boiler is close to the measured curve and can more accurately reflect the mid-to-long-term operating characteristics of the thermal power unit. To sum up, it is necessary to consider the influence of the boiler on the dynamic response of the unit in long-term stability analysis.

Simulation analysis of the influence of the spray desuperheating system on the dynamic characteristics of the steam turbine system. Based on Figure 7, a boiler-turbine system model is established, and its dynamic response characteristics are simulated and analyzed.

(1) Speed step disturbance. The given value of turbine speed changes from 0.8 to 0.85, 0.9 and 0.95, respectively, and the dynamic response simulation results are shown in Figure 10. It can be seen from Figure 10 that when the given speed changes, the main steam flow is adjusted to the required value within a certain period of time, so as to regulate the output power and speed of the turbine relatively quickly and finally meet the demand.

(2) Desuperheating water flow disturbance. The flow of desuperheating water is set to decrease by 20%, 40% and 60%, respectively, with the other inputs unchanged. The dynamic response of the system is shown in Figure 11. It can be seen from Figure 11 that when the desuperheating water flow is greatly reduced, the main steam flow is correspondingly reduced, but the reduction is not significant. The spray desuperheating system has little influence on the main steam parameters, so it can only adjust and
2020-10-28T18:44:21.541Z
2020-09-01T00:00:00.000
{ "year": 2020, "sha1": "03b7c81c7b8031e0e8ca31ed6c60d1cae39d9ed0", "oa_license": null, "oa_url": "https://doi.org/10.1088/1742-6596/1633/1/012029", "oa_status": "GOLD", "pdf_src": "IOP", "pdf_hash": "12bb4c0c7ca0b79f76f4301bba49463082bd5cb1", "s2fieldsofstudy": [ "Engineering" ], "extfieldsofstudy": [ "Environmental Science" ] }
3989581
pes2o/s2orc
v3-fos-license
Developing gene-tagged molecular markers for evaluation of genetic association of apple SWEET genes with fruit sugar accumulation

Sugar content is an important component of fruit quality. Although sugar transporters are known to be crucial for sugar accumulation, the role of genes encoding SWEET sugar transporters in fruit sugar accumulation remains elusive. Here we report the effect of the SWEET genes on fruit sugar accumulation in apple. A total of 25 MdSWEET genes were identified in the apple genome, and 9 were highly expressed throughout fruit development. Molecular markers of these 9 MdSWEET genes were developed and used for genotyping of 188 apple cultivars. The association of polymorphic MdSWEET genes with soluble sugar content in mature fruit was analyzed. Three genes, MdSWEET2e, MdSWEET9b, and MdSWEET15a, were significantly associated with fruit sugar content, with MdSWEET15a and MdSWEET9b accounting for a relatively large proportion of the phenotypic variation in sugar content. Moreover, both MdSWEET9b and MdSWEET15a are located on chromosomal regions harboring QTLs for sugar content. Hence, MdSWEET9b and MdSWEET15a are likely candidates regulating fruit sugar accumulation in apple. Our study not only presents an efficient way of implementing gene functional studies but also provides molecular tools for genetic improvement of fruit quality in apple-breeding programs.

INTRODUCTION

Sugar is the main carbon source and energy-supplying substance in organisms, and it plays an important role in plant growth and development. Sugar is the main product of photosynthesis, and carbon dioxide assimilation occurs mainly in the chloroplast stroma. The main assimilation product synthesized in chloroplasts is triose phosphate, most of which is converted to either sucrose in the cytosol or starch in the chloroplast 1,2 . Sucrose is commonly translocated to other carbon-demanding organs through long-distance transport in the phloem. Hence, sugar transport is critical for maintaining the source-sink balance 3 . Over the past two decades, various sugar transporters have been identified in living organisms, including plants, animals, humans, and fungi [4][5][6] . These transporters can be categorized into four families: sodium solute symporter transporters; major facilitator superfamily transporters; phosphotransferase system transporters; and sugars will eventually be exported transporters (SWEETs). Among these transporters, SWEETs have emerged as a unique and novel class of sugar transporters. SWEETs are evolutionarily conserved and exist widely in eukaryotes and in prokaryotes such as archaea and eubacteria. The first member of the SWEET gene family, MtN3, was identified in Medicago truncatula, and its expression was induced during root nodule development 7 . Later, a homolog of the MtN3 gene, designated Saliva, was identified in Drosophila and displayed salivary gland-specific expression during embryonic development 8 . Recently, MtN3/Saliva-type genes were functionally characterized as sugar transporters in both animals and plants, thus giving rise to the name "SWEET" 9 . SWEET proteins are characterized by the MtN3/Saliva motif (also known as the PQ-loop repeat), which comprises three alpha-helical transmembrane domains. Eukaryotic SWEETs consist of a tandem repeat of the basic 3-TM unit separated by a single transmembrane domain, which constitutes a 3-1-3 TM structure.
In contrast, prokaryotic SWEETs, also called SemiSWEETs, contain only a single 3-TM unit, suggesting that eukaryotic SWEETs evolved through a duplication of the 3-TM unit 10 . In plants, SWEETs function as bidirectional uniporters that mediate influx and efflux of sugars across cell membranes. SWEETs can be divided into four clades 11 . Clades I, II, and IV SWEETs transport predominantly hexoses, whilst clade III SWEETs appear to be sucrose transporters (SUTs) [12][13][14][15] . For example, the clade I SWEET AtSWEET1 and the clade II SWEET OsSWEET5 mediate the uptake and efflux of glucose or galactose, respectively, across the cell membrane 9,16 . Two clade III SWEETs, AtSWEET11 and AtSWEET12, are involved in the efflux of photosynthesized sucrose from phloem parenchyma cells into the intercellular space for phloem loading and long-distance translocation of sucrose 17 . A newly identified clade III SWEET, AtSWEET9, transports sucrose to the apoplast and plays an essential role in nectar secretion 18 . The clade IV SWEET AtSWEET17 functions as a fructose-specific uniporter, with a key role in facilitating bidirectional transport of fructose across the tonoplast in leaf and root 12,13 . In addition, SWEETs have been shown to affect various physiological processes, such as pollen development 19,20 , seed filling 21,22 , stress and senescence 14,16,23,24 , modulation of the gibberellin response 25 , and host-pathogen interaction 26 . Usually, abiotic stresses such as cold, high salinity, and drought, as well as biotic stresses caused by fungi or bacteria, result in induction of specific SWEET genes 27,28 .

The accumulation of carbohydrates in storage organs such as seeds and fruits mainly depends on the supply of photoassimilates from photosynthetic tissues, especially source leaves. In most plants, sucrose is the major carbohydrate transported over long distances in the veins to support the growth and development of storage organs 6 . Sugar transporter genes exhibit divergent evolutionary patterns and play important roles in sugar accumulation in plants 29 . For example, genes encoding SUT proteins are involved in sugar translocation toward storage organs [30][31][32] . Recently, the SWEET4 gene has also been found to play an important role in seed filling 22 . In maize and rice, SWEET4 shows high expression during seed development and contributes to seed filling by enhancing the import of hexoses into the endosperm. This study sheds light on the mechanism by which sucrose is released from maternal tissues, such as the seed coat, to support filial tissues, such as the embryo. Since sugar content is an important component of fruit quality, increasing attention has also been paid to investigating the SWEET gene family in fruit crops such as apple 33 , grapevine 34 , and banana 35 . However, the effect of SWEETs on fruit sugar accumulation remains elusive.

The domesticated apple, Malus x domestica Borkh., is an economically important fruit crop worldwide. Apple belongs to the family Rosaceae, and the cultivated apple is a diploidized autopolyploid species with a basic chromosome number of x = 17. The draft genome sequence of the domesticated apple has been released and corresponds to approximately 750 Mb per haploid genome 36 . In this study, we report the identification of the MdSWEET genes with high expression during apple fruit development. DNA markers for these MdSWEETs were developed to investigate their association with fruit sugar content.
Our study not only aids a better understanding of the effect of SWEETs on fruit sugar accumulation but will also be helpful for genetic improvement of fruit sugar accumulation in apple-breeding programs.

Plant material. All 188 apple cultivars (Table S1) used in this study, which show great variation in fruit sweetness, are maintained at the Xingcheng Institute of Pomology of the Chinese Academy of Agricultural Sciences, Xingcheng, Liaoning, China. Young leaves used for genomic DNA extraction were collected in the spring of 2015. Leaf samples were immediately frozen in liquid nitrogen and then stored at −80°C until use. Fruits at the mature stage were randomly collected in 2015, and fruit maturity was comprehensively estimated based on skin background color and blush development, seed color turning brown, and previous records of maturity date. Each cultivar had three replicates, consisting of nine fruits. Fruit samples were cut into small pieces, immediately frozen in liquid nitrogen, and then stored either at −40°C for sugar measurement or at −80°C for real-time PCR (RT-PCR) analysis.

Measurement of soluble sugar content. The content of soluble sugar components was measured using high-performance liquid chromatography (HPLC) according to our previous report 37 . Briefly, fruit samples were ground into fine powder in liquid nitrogen using an A11 basic analytical mill (IKA, Germany). One gram of powder was dissolved in 6 ml sterilized deionized water; the mixture was extracted in an ultrasonic bath for 30 min and then centrifuged at 6000 r/s for 15 min. The supernatant was collected, purified using a SEPC18 syringe (Supelclean ENVI C18 SPE), and subsequently filtered through a 0.22 μm Sep-Pak filter. The filtered supernatant was used for sugar content measurement using a Dionex P680 HPLC system (Dionex Corporation, CA, USA) equipped with a refractive index detector (Shodex RI-101; Shodex, Munich, Germany). The separation was performed on a Transgenomic COREGET-87C column (7.8 mm × 300 mm, 10 μm) together with a Transgenomic CARB Sep Coregel 87C guard column. The column temperature was maintained at 85°C using a Dionex TCC-100 thermostated column compartment. The mobile phase was degassed, distilled, deionized water at a flow rate of 0.6 ml/min. Peak areas were integrated with the Chromeleon chromatography data system according to external standard solution calibrations (reagents from Sigma Chemical Co., Castle Hill, NSW, Australia). Sugar concentrations were expressed on a fresh weight (FW) basis, and total sugar content was taken as the sum of the four sugars found in apple fruit, i.e., glucose, sucrose, fructose, and sorbitol. In addition, the measurement of soluble solids content (SSC) was conducted using a pocket refractometer (Atago, Tokyo, Japan).

Identification of the SWEET genes in apple and their phylogenetic analysis. Coding DNA sequences of the SWEET gene family in Arabidopsis thaliana were retrieved from the Arabidopsis Information Resource (TAIR, http://www.arabidopsis.org/). These coding sequences were used as query sequences to search against the apple genome sequence database (GDDH13 V1.1, https://www.rosaceae.org/blast/) by BlastX with a cutoff E-value of 1.00E-10. The homologs of Arabidopsis SWEET genes in the apple genome were named according to their phylogenetic relationships to the founding members of the family in Arabidopsis 9 .
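The genome-wide screen just described reduces to a homology search followed by an E-value filter. The snippet below is a hypothetical illustration of that filtering step applied to BLAST tabular output (-outfmt 6); the file name and the assumption that a local tabular result file is available are mine, since the study reports querying the GDR BLAST server directly.

```python
def filter_blast_hits(blast_tab_path: str, evalue_cutoff: float = 1e-10):
    """Parse BLAST tabular output (-outfmt 6) and keep subject IDs of hits whose
    E-value is at or below the cutoff used in the study (1.00E-10)."""
    kept = set()
    with open(blast_tab_path) as handle:
        for line in handle:
            fields = line.rstrip("\n").split("\t")
            if len(fields) < 12:
                continue  # skip malformed or comment lines
            subject_id, evalue = fields[1], float(fields[10])
            if evalue <= evalue_cutoff:
                kept.add(subject_id)
    return sorted(kept)

# Hypothetical usage: apple loci hit by Arabidopsis SWEET coding sequences.
# candidates = filter_blast_hits("atsweet_vs_apple_blastx.tsv")
```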
Chromosome lengths and gene locations were presented according to the draft genome sequence of the doubled haploid (GDDH13 V1.1) 38 . Multiple alignment of amino-acid sequences of SWEET genes in Arabidopsis and apple was conducted using the integrated MUSCLE alignment program in MEGA5 (Molecular Evolutionary GenetiMd Analysis) with default parameters 39 . The resulting data matrix was analyzed using the Neighbor-Joining method. The bootstrap consensus tree was inferred from 1000 replicates and the bootstrap values <50% were collapsed. RNA isolation and quantitative RT-PCR Two apple cultivars, K9 and Shizishan 2, were randomly selected for quantitative RT-PCR (qRT-PCR) analysis. Fruit samples were collected at 30, 60, and 90 days after full bloom. Each cultivar had three biological replicates, containing of nine fruits. Fruits of each replicate were cut into small pieces, mixed, and used for total RNA extraction. RNA extraction was conducted using RNA prep Pure Plant Kit (TianGen, Beijing, China) according to the manufacturer's instructions. RNA concentration and quantity were detected and assessed with NanoDrop2000 (Thermo Scientific). Approximately 1 µg of total RNA was used to synthesize the first strand of cDNA using TransScript One-Step gDNA Removal and cDNA Synthesis SuperMix (TRANS, Beijing, China) following the manufacturer's instructions. qRT-PCR was performed in 20 µL reaction containing 1× SYBR Green II Master Mix (Takara, Dalian, China), 0.2 µM of each primer, and 0.5 µL of template cDNA. The qRT-PCR amplifications were performed using the Applied Biosystems 7500 Real-Time PCR System (Applied Biosystems, USA), and the reaction program was set as follows: 95°C for 1 min, one cycle; and 95°C for 5 s, 60°C for 34 s, 40 cycles. Melting curve analysis was performed at the end of 40 cycles by heating from 55 to 95°C at a rate of 0.5°C/s. An apple polymer ubiquitin enzyme gene (UBQ) was selected as a constitute control 40 . The relative expression level of all detected genes was calculated according to the cycle threshold 2 −ΔΔCT method. All analyses were performed in triplicates. The primer sequences are listed in Table S2. Development of gene-tagged simple sequence repeat and cleaved amplified polymorphism sequence markers Simple sequence repeat (SSR) or cleaved amplified polymorphism sequence (CAPS) markers were development for SWEETs that showed high expression in fruit of apple. SSR Hunter software and dCAPS Finder 2.0 (http://helix.wustl.edu/dcaps/dcaps.html) were used to screen SSRs with ≥7 repeats or CAPS markers, respectively, in genomic DNA sequences of each SWEET gene, including 2 kb upstream of the start codon, the entire coding region, and 2 kb downstream of the translation stop codon. Primers flanking the SSR and CAPS loci were designed using the Primer 5 program, and polymorphism of the SSR and CAPS markers was evaluated using four apple cultivars with high sugar content and four cultivars with low sugar content. PCR amplification was performed using the GeneAmp PCR System 9700 (ABI, USA) with the following condition: 3 min at 94°C, followed by 35 cycles consisting of 94°C for 30 s, 60°C for 30 s, 72°C for 30 s, and with a final extension of 72°C for 7 min. For the SSR markers, 3 μL of amplification products was mixed with an equal volume of formamide loading buffer (98 % formamide, 10 mM EDTA, pH 8.0, 0.025% bromophenol blue, and xylene cyanol). The mixture was denatured at 95°C for 5 min, and then immediately put on ice for 5 min. 
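The relative expression calculation referenced above can be made explicit with a short sketch of the 2^(-ΔΔCt) method, normalizing the target gene to UBQ and to a calibrator sample (for example, the 30 days-after-full-bloom sample). The Ct values shown are placeholders, not measured data.

```python
# Minimal 2^(-ΔΔCt) calculation: target gene normalized to UBQ, relative to a calibrator sample.
from statistics import mean

def relative_expression(ct_target, ct_ubq, ct_target_cal, ct_ubq_cal):
    """Technical-replicate Ct values in, fold change out (2^-ΔΔCt)."""
    d_ct_sample = mean(ct_target) - mean(ct_ubq)               # ΔCt of the sample
    d_ct_calibrator = mean(ct_target_cal) - mean(ct_ubq_cal)   # ΔCt of the calibrator
    dd_ct = d_ct_sample - d_ct_calibrator                      # ΔΔCt
    return 2 ** (-dd_ct)

# Placeholder Ct values for a target gene at 90 DAF versus a 30 DAF calibrator
print(relative_expression(
    ct_target=[24.1, 24.3, 24.2], ct_ubq=[19.8, 19.9, 19.7],
    ct_target_cal=[27.5, 27.4, 27.6], ct_ubq_cal=[19.9, 20.0, 19.8]))
```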
An aliquot of 2 μL mixture was on an 8% polyacrylamide gel and electrophoresed for 1-1.5 h at 1000 V. DNA bands were visualized after silver staining, and their sizes were estimated on the basis of a standard 25 bp DNA ladder. For the CAPS markers, amplification products were digested with restriction enzymes and then separated on 2% agarose gel. DNA bands in agrose gel were visualized under ultraviolet light after staining with ethidium bromide. Statistical analysis A total of 188 apple cultivars were subjected to statistical analysis. Each cultivar was genotyped for the SWEET loci following analysis of molecular marker profiles. The detection of association between molecular markers and fruit sugar accumulation was performed with the software package TASSEL version 3.0 according to our previous report 41 . The criterion for marker-trait association was set at P ≤ 0.01. Fisher's least significant difference at P < 0.01 was used to compare mean soluble sugar contents between cultivars. RESULTS The SWEETs gene family in the apple genome A total of 25 MdSWEET genes were identified in the apple genome. Of the 25 MdSWEET genes, 24 were located on five homologous pairs of chromosomes (3-11, 5-10, 4-12, 6-14, 13-16), and 1 on chromosome 17 (Fig. 1a). Genomic structural analysis showed that the majority of MdSWEET genes consisted of six exons, while 4 MdSWEET genes, MdSWEET5b, MdSWEET7a, MdSWEET7b, and MdSWEET11, contained five exons ( Fig. 1b and Table S3). The open reading frames of the MdSWEET genes ranged from 645 to 1020 bp in length and their deduced proteins ranging from 215 to 340 amino acids in length (Table S3). The conserved domain prediction indicated that 21 MdSWEET genes had seven alpha-helical TMs. By contrast, 3 MdSWEET genes, MdSWEET5a, MdSWEET9a, and MdSWEET5b, had six TMs, with absence of the TM7 domain (Table S3). Interestingly, the remaining MdSWEET11 gene had eight TMs. In addition, it is worth noting that an additional SWEET gene (GDR accession no. MD01G1215700) was also found in the apple genome. This SWEET gene was located on chromosome 1, with four exons (Fig. 1b), contained only one MtN3/Saliva motif, and showed extremely low expression throughout fruit development. Hence, this SWEET gene was deemed a pseudogene and was not included in the later analysis. Expression profiling of MdSWEET genes in fruit at different developmental stages To identify SWEET genes that are potentially involved in fruit sugar accumulation, we investigated the expression profiling of the MdSWEET genes in fruits of two cultivars, K9 and Shishan 2, at different developmental stages, including juvenile, expanding, and mature stages (Fig. 3). Of the 25 MdSWEET genes, 16 showed extremely low expression throughout fruit development, with relative expression levels ranging from 0 to 0.1. In contrast, 9 MdSWEET genes, MdSWEET2a, MdSWEET2b, MdSWEET2d, MdSWEET2e, MdSWEET7a, MdSWEET7b, MdSWEET9b, MdSWEET12a, and MdSWEET15a, were highly expressed in fruits at all the three stages tested. Hence, these 9 MdSWEET genes were assumed to be potential candidates related to fruit sugar accumulation, and were further subjected to develop gene-tagged markers for evaluation of their genetic association with fruit sugar content. Development of gene-tagged markers for SWEETs with high expression in fruit and their polymorphisms in a collection of apple cultivars Two types of molecular markers, SSR and CAPS, were developed for the nine MdSWEET genes mentioned above ( Table 1). 
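The selection of the nine candidate genes amounts to screening the expression profiles against a cutoff; a minimal sketch of that screen is given below. The 0.1 cutoff mirrors the "0 to 0.1" range described for the low-expression genes, and the expression values are invented placeholders rather than the measured profiles.

```python
# Select candidate MdSWEETs for marker development: keep genes whose relative expression
# exceeds the cutoff at every fruit stage sampled (30, 60, 90 days after full bloom).
expression = {  # gene -> relative expression at the three stages (invented values)
    "MdSWEET2e":  [1.8, 2.4, 3.1],
    "MdSWEET9b":  [0.9, 1.6, 2.2],
    "MdSWEET15a": [1.2, 2.0, 2.9],
    "MdSWEET8":   [0.02, 0.05, 0.04],
}
CUTOFF = 0.1

candidates = [gene for gene, values in expression.items()
              if all(v > CUTOFF for v in values)]
print("candidates for gene-tagged markers:", candidates)
```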
The SSR markers for two SWEET genes, MdSWEET7a and MdSWEET9b, were developed based on a (CT) n microsatellite located in the second intron, whereas dinucleotide microsatellites, including (AG) n , (GA) n , (AT) n , (CT) n , and (TA) n , located upstream of the start codon were used to develop SSR markers for six SWEET genes, MdSWEET2a, MdSWEET2b, MdSWEET2d, MdSWEET2e, MdSWEET7b, and MdSWEET12a (Table 1). A T/C single-nucleotide polymorphism (SNP) in the first intron of MdSWEET15a was successfully used to develop a CAPS marker. PCR products harboring a "C" nucleotide at the polymorphic site could be digested with the NdeI enzyme, producing two fragments of 438 and 203 bp in size, while no digestion for PCR products harboring a "T" nucleotide. These nine gene-tagged markers were subsequently used to screen a collection of 188 cultivars (Fig. 4). As a result, two alleles at each polymorphic locus were detected for three MdSWEET genes, MdSWEET2a, MdSWEET12a, and MdSWEET7a, while three alleles at each polymorphic locus were observed for five MdSWEET genes, MdSWEET2b, MdSWEET2d, MdSWEET2e, MdSWEET7b, and MdSWEET9b. Six genotypes derived from the random combination of three alleles were identified at each of the four gene loci, including MdSWEET2b, MdSWEET2d, MdSWEET2e, and MdSWEET7b, while only four genotypes were detected at the MdSWEET9b locus (Table S4). Three genotypes at each polymorphic locus were identified for three MdSWEET genes, MdSWEET2a, MdSWEET7a, and MdSWEET12a. By contrast, the CAPS marker revealed Association between MdSWEET genes and fruit sugar accumulation in apple SSC and soluble sugar content in mature fruit for all the tested cultivars are listed in Table S1 and their distributions are shown in Fig. 5. SSC and the contents of sucrose, fructose, glucose, and total sugar components showed a normal distribution, whereas the distribution of sorbitol content was skewed toward low sorbitol contents. A wide range was observed for the concentration of various sugar components, including sucrose (3.8-41.75 mg/g FW), glucose (3.86-31.80 mg/g FW), fructose (30.48-83.82 mg/ g FW), sorbitol (0.14-22.62 mg/g FW), and total sugar components (47.93-142.40 mg/g FW), with an average of 21.65, 14.98, 49.07, 3.90, and 89.60 mg/g FW, respectively. SSC ranged from 7.96 to 20.02, with an average of 12.76. Overall, the apple cultivar collection has a great variation in fruit sugar content, which suggests that they are suitable for investigating genetic association of MdSWEET genes with fruit sugar accumulation. Candidate gene-based association mapping was further performed to investigate association between the polymorphic loci of MdSWEET genes and fruit sugar accumulation in apple. As a result, three genes, MdSWEET2e, MdSWEET9b, and MdSWEET15a, showed a significant association with fruit sugar accumulation, whereas no significant association was observed for the remaining six genes, MdSWEET2a, MdSWEET2b, MdSWEET2d, MdSWEET7a, MdSWEET7b, and MdSWEET12a (Table 2). Based on the presence or absence of the (AT) 13 allele on the MdSWEET2e locus, all the tested cultivars were grouped into three genotypes, (AT) 13/13 , (AT) 13/7 or 17 , and (AT) 7 or 17/7 or 17 . The (AT) 13/13 genotype had significantly higher than both (AT) 13/7 or 17 and (AT) 7 or 17/7 or 17 genotypes for the sucrose, fructose, or total sugar content (Fig. 6). 
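The CAPS scoring logic for MdSWEET15a described above can be summarized in a few lines: the C allele yields 438 bp and 203 bp NdeI fragments, the T allele remains uncut (641 bp), and heterozygotes show both patterns. The band sizes follow the text, while the size tolerance and the helper function itself are illustrative assumptions.

```python
# Map observed restriction-fragment patterns from the MdSWEET15a CAPS assay to genotypes.
# C allele: NdeI cuts the amplicon into 438 bp + 203 bp; T allele: amplicon stays uncut (641 bp).
UNCUT, CUT = 641, {438, 203}

def call_caps_genotype(bands, tol=10):
    """bands: estimated fragment sizes (bp) observed for one cultivar on the agarose gel."""
    has = lambda size: any(abs(b - size) <= tol for b in bands)
    t_allele = has(UNCUT)
    c_allele = all(has(s) for s in CUT)
    if t_allele and c_allele:
        return "T/C"
    if c_allele:
        return "C/C"
    if t_allele:
        return "T/T"
    return "no call"

print(call_caps_genotype([640]))            # uncut only -> T/T
print(call_caps_genotype([642, 440, 205]))  # uncut plus both cut fragments -> T/C
```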
Similarly, all the cultivars were divided into three genotypes, (CT) 19/ 19 , (CT) 19/23 , and (CT) 23/23 or 26, based on the presence or absence of the (CT) 19 allele on the MdSWEET9b locus. Cultivars with two (CT) 19 alleles had significantly higher than cultivars with one or no (CT) 19 allele for the fructose or total sugar content. Based on the polymorphic locus of MdSWEET15a, all the cultivars were assigned to two genotypes, T/T and T/C. The T/T genotype had significantly higher than the T/C genotype for SSC and the glucose, sorbitol, or total sugar content. However, the (AT) 13 allele of MdSWEET2e and the (CT) 19 allele of MdSWEET9b had no effect on the soluble solids, glucose, or sorbitol content and the soluble solids, sucrose, glucose, or sorbitol content, respectively (data not shown). Similarly, the polymorphic locus of MdSWEET15a had no effect on accumulation of sucrose and fructose. Of the above three genes associated with fruit sugar accumulation, MdSWEET15a had relatively higher contributions to the observed phenotypic variation, and accounted for 6.4%, 6.8%, 5.7%, and 8.4% of the observed phenotypic variation in the soluble solids, glucose, sorbitol and total sugar content, respectively. The MdSWEET9b gene accounted for 6.6% and 2.5% of the observed phenotypic variation in the fructose and total sugar content, respectively. By contrast, MdSWEET2e had lower contribution, accounting for 0.7%, 2.7%, and 3.6% of the observed phenotypic variation in the sucrose, fructose, and total sugar content, respectively. Taken together, all the above results suggest that one clade II SWEET gene, MdSWEET2e, and two clade III SWEET genes, MdSWEET9b and MdSWEET15a, are genetically associated with sugar content in mature apple fruit, with MdSWEET15a and MdSWEET9b showing relatively higher contribution. DISCUSSION Sugar transporters play a crucial role in plant growth and development as they mediate sugar uptake or release from cells or subcellular compartments 21 17,22,42 . Although preliminary analyses of the SWEET gene family have been reported in several fruit crops [33][34][35] , little is known about the role of SWEETs in fruit sugar accumulation. In this study, we report for the first time genetic association of the SWEET genes with fruit sugar content in apple. Our study indicates that developing gene-tagged markers is an efficient way to investigate gene functionality, and the gene-tagged markers can be directly used in breeding programs if they are associated with horticultural traits of interest. Duplication of SWEETs in the apple genome Gene duplication is a major driving force for recruitment of genes in plants. Apple is diploid, with an autopolyploidization origin 36,43 . Here our study reveals 25 SWEETs in the apple genome, which is inconsistent with a previous report that identifies a total of 29 apple SWEETs 33 . This inconsistency is likely due to the fact that our analysis is based on of the draft genome sequence of "Golden delicious" doubled-haploid tree (GDDH13 V1.1) 38 , while the genome sequence of apple cv. Golden Delicious 36 was used in the study reported by Wei et al 33 . All the MdSWEET genes except MdSWEET8 are located on homologous pairs of chromosomes. For example, chromosomes 6 and 14 are homologous pairs and both contain one copy of each of the four SWEET genes, MdSWEET7, MdSWEET12, MdSWEET10, and MdSWEET5. The MdSWEET9 gene has two homologs, which are separately located on the bottom of two homologously paired chromosomes 4 and 12. 
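The association tests and the percentages of phenotypic variation reported above can be illustrated with a simplified calculation: a one-way ANOVA across genotype classes, followed by the proportion of variance explained by the genotype means. Note that TASSEL's mixed models additionally correct for population structure and kinship, which this sketch does not, and all phenotype values below are invented; only the P ≤ 0.01 criterion follows the study.

```python
# Simplified marker-trait association check plus variance explained (R^2) for one locus.
import numpy as np
from scipy import stats

# Invented total sugar contents (mg/g FW) grouped by MdSWEET15a CAPS genotype
sugar_by_genotype = {
    "T/T": [95.0, 102.3, 91.8, 99.5, 97.2, 104.1],
    "T/C": [84.2, 88.6, 80.9, 86.1, 90.3, 83.7],
}

groups = [np.array(v) for v in sugar_by_genotype.values()]
f_stat, p_value = stats.f_oneway(*groups)        # naive one-way ANOVA across genotype classes

all_vals = np.concatenate(groups)
grand_mean = all_vals.mean()
ss_total = ((all_vals - grand_mean) ** 2).sum()
ss_between = sum(len(g) * (g.mean() - grand_mean) ** 2 for g in groups)
r2 = ss_between / ss_total                       # proportion of phenotypic variance explained

verdict = "associated" if p_value <= 0.01 else "not associated"
print(f"P = {p_value:.4g} ({verdict} at P <= 0.01), variance explained = {100 * r2:.1f}%")
```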
Chromosomes 5 and 10 are homologous pairs and both contain four MdSWEET genes. These results unambiguously demonstrate that duplication of SWEET genes is related to whole-genome duplication (WGD). Chromosome 3 is homologous to chromosome 11. Chromosome 3 contains a single SWEET gene MdSWEET2a on the bottom chromosome, whilst a cluster of two SWEET2 genes, MdSWEET2b and MdSWEET11, are found on the bottom of chromosome 11. Similarly, chromosomes 13 and 16 are homologous pairs. A cluster of two SWEET2 genes, MdSWEET15b and MdSWEET17, are found on the top of chromosome 13, whilst only single SWEET gene MdSWEET15a located on the homologous region of chromosome 16. These results also suggest that tandem duplication of SWEET genes has probably occurred on chromosomes 11 and 13. In addition, it is worth noting that one SWEET gene, MdSWEET8, is located on the bottom of chromosome 17, but no SWEET gene was found on chromosome 9 that is homologous to chromosome 17. Since duplicated gene copies following WGD are prone to be rapidly lost 44 , it is reasonable to speculate that a SWEET gene on chromosome 9 may have been lost in the ancestor of apple. Most WGD-derived duplicated genes are prone to diverge in expression 45 . This case is also detected for the apple SWEET genes. For example, two SWEET homologs, MdSWEET15a and MdSWEET15b, which are located on homologous pair of chromosomes 16 and 13, respectively, have undergone divergence in expression. MdSWEET15a is highly expressed throughout fruit development, whilst MdSWEET15b shows no expression in fruit. Similarly, MdSWEET9a and MdSWEET9b are located on homologous pair of chromosomes 12 and 4, respectively. MdSWEET9b shows high expression throughout fruit development, but MdSWEET9a with no expression in fruit. In addition, expression divergence was also observed for tandem duplicated SWEET genes. For example, MdSWEET2b and MdSWEET11 are clustered on the bottom of chromosome 11. MdSWEET2b is highly expressed throughout fruit development, but MdSWEET11 shows no expression in fruit. However, it is unclear whether or not the remaining 16 MdSWEET genes with extremely low expression in fruit have also diverged in expression. Taken together, all the results above suggest that the SWEET genes in apple have undergone polyploidization and/or segmental duplication during the process of speciation, and some duplicated SWEET genes have diverged in expression. Candidate MdSWEETs involved in the regulation of fruit sugar accumulation in apple Measurement of soluble sugar content in mature fruits of 188 apple cultivars reveals that the average concentration of fructose is over twofold higher than those of sucrose, glucose, and sorbitol, which confirms our previous report of fructose being the major sugar component in mature apple fruit 37 . Genotyping of the apple cultivar collection using nine gene-tagged molecular markers further indicates that three MdSWEET genes, MdSWEET2e, MdSWEET15a, and MdSWEET9b, are associated with fruit sugar accumulation. Six genotypes produced by three alleles at single polymorphic locus were detected for four SWEET genes, MdSWEET7b, MdSWEET2d, MdSWEET2b, and MdSWEET2e, and three genotypes arisen from two alleles at single polymorphic locus were observed for three SWEET genes, MdSWEET12a, MdSWEET2a, and MdSWEET7a. In contrast, the (CT) 26/26 and (CT) 19/26 genotypes and the C/ C genotype were not found at the MdSWEET9b or MdSWEET15a loci, respectively, in the apple cultivar collection. 
Since selection is well-known to be a directional process that results in changes in the frequency of various genotypes in the population, the MdSWEET15a and MdSWEET9b loci have probably undergone selection during the process of apple domestication. Linkage mapping of quantitative trait loci (QTLs) for sugar content have revealed many QTLs with minor effects on all linkage groups (LGs) except LG7 and LG17 [46][47][48][49] . MdSWEET15a is located on the region of LG16, which contains several QTLs for Brix and the amount of sorbitol and fructose 48,50 . Similarly, MdSWEET9b is located on the region of LG4 harboring a QTL for individual sugar content 49 . By contrast, MdSWEET2e is located far away (approximately 11.2 Mb) from the region of LG10 harboring a QTL for Brix and sucrose content. In addition, MdSWEET2e has a relative smaller contribution to phenotypic variation in sugar content compared with both MdSWEET15a and MdSWEET9b. Taken together, all these results above suggest that MdSWEET15a and MdSWEET9b are likely candidates involved in sugar accumulation in apple fruit. In peach, two major QTLs for fruit sugar content have been reported on the top of LG4 and the bottom of LG5, respectively 51 . Interestingly, these two QTL regions both contain SWEET genes, based on the draft genome sequence of peach cv. Lovell 50 . The two QTL regions on peach LG4 and LG5 correspond to syntenic blocks on apple LG3 and LG6, respectively, which harbor QTLs for sugar content 52 . Thus, it seems that SWEET genes may also function as candidates for sugar accumulation in peach and other Rosaceae fruit crops. Both MdSWEET9b and MdSWEET15a belong to the clade III SWEETs that are proved to be efficient SUTs 11 . In this study, the cultivars with the T/C genotype at the MdSWEET15a locus have higher average sucrose content in mature fruit compared with those with the T/T genotype. Similarly, the cultivars with two (CT) 19 alleles have higher average sucrose content in mature fruit than those with one or no (CT) 19 allele. Thus, the clade III SWEET genes in apple, similar to clade III SWEETs in Arabidopsis 9 , may be involved in sucrose transportation. However, the difference in average sucrose content between various genotypes in the MdSWEET9b or MdSWEET15a loci does not reach a significance level (P < 0.01). This might be partially due to the reason that sucrose in the vacuolar has a trend to conversion into hexoses, resulting in fructose being the major sugar component in mature apple fruit 37 . Besides MdSWEET15a and MdSWEET9b, two additional SWEET genes, MdSWEET2a and MdSWEET2b, are also located on the regions of LGs 3 and 11, respectively, which contain QTLs for fruit sorbitol content in apple 53 . However, these two MdSWEET genes both have no significant association with sorbitol content. This could be attributed to certain process of sugar metabolism in fruit, where sorbitol unloaded from leaves into the fruit is converted to fructose or glucose 54 . It is worth noting that gene-tagged markers developed in this study may not correspond to functional variants that account for associations with sugar content. Thus, further studies are still needed to ascertain whether MdSWEET2a and MdSWEET2b have an influence on fruit sugar accumulation in apple. Marker-assisted selection (MAS) is a valuable tool in breeding programs of plants, particularly fruit crops. 
In this study, a T/C SNP in the first intron of MdSWEET15a accounts for 6-8% of phenotypic variation for SSC (6.4%) and total sugar content (8.4%) among apple germplasm, while a (CT) n SSR locus in the second intron of MdSWEET9b explains approximately 7% of phenotypic variation for the concentration of fructose, the major sugar component in apple fruit. Since these two genetagged makers account for considerable phenotypic variation, they can serve as efficient tools for genetic improvement of fruit sweetness in apple-breeding programs with MAS. In summary, our study suggests that both MdSWEET9b and MdSWEET15a are likely candidates regulating fruit sugar accumulation in apple. It is worthy of study in the future to clarify the functions of MdSWEET9b and MdSWEET15a.
2018-03-20T13:13:09.562Z
2018-03-20T00:00:00.000
{ "year": 2018, "sha1": "8bf85807b9911206a9c52fd67ad4dd1f7eb990fd", "oa_license": "CCBY", "oa_url": "https://www.nature.com/articles/s41438-018-0024-3.pdf", "oa_status": "GOLD", "pdf_src": "PubMedCentral", "pdf_hash": "58a8fbbe3d6254a50ce1c744f90940dcde791d9e", "s2fieldsofstudy": [ "Agricultural And Food Sciences" ], "extfieldsofstudy": [ "Biology", "Medicine" ] }
238007524
pes2o/s2orc
v3-fos-license
THE COMPARISON OF PRICING METHODS IN THE CARBON AUCTION MARKET VIA MULTI-AGENT Q-LEARNING . In this paper, the uniform price and discriminative price methods are compared in the carbon auction market using multi-agent Q-learning. The government and different firms are considered as agents. The government as auctioneer allocates initial permits in the carbon auction market, and the firms as bidders compete with each other to obtain a larger share of the auction. The carbon trading market, penalty, reserve price, and bidding volume limitation are considered. The simulation analysis demonstrates that bidders have different behavior in two pricing methods under different amounts of carbon permits. In the uniform price, the value of bidding volume, firms’ profit, and the trading volume for low permits and the value of the government revenue, clearing price, the trading price, and auction efficiency for high permits are greater than ones in the discriminative price method. Bidding prices have a higher dispersion in the uniform price than the discriminative price method for different amounts of carbon permits. There are several types of auctions that are mainly divided into two groups: static (sealed) and dynamic (clock) [28]. In a static auction, two market pricing methods, i.e., uniform pricing and discriminatory pricing, have been developed. In the uniform price method, each winner pays the market clearing price (the price at which the aggregate demand curve intersects the supply curve), while in the discriminatory price method, they pay their bid prices [12,13]. Selecting the right pricing method for a carbon market is still a hot debate. For example, Cong and Wei [11] indicate when carbon allowances are nearly low, the discriminative price method is more efficient than the uniform price, while little participants benefit more in the other method. Cong and Wei [12] show when the number of participants is relatively high, and there is no communication between them, the English auction clock is more effective than the uniform and discriminatory pricing. But when the number of bidders is relatively low and there is communication between them, discriminatory price auction is better than two other methods and prevents collusion. Tang et al. [28] demonstrate the uniform price method has a smaller effect on economic damage and emission reduction compared to the discriminatory price method. Many studies, like Santos et al. [7], Hattori and Takahashi [16], Sugiyarto [27], Akbari-Dibavar et al. [1], and Matthäus [20], have been conducted to compare these two pricing methods in different areas, but their results are not consistent. However, there are few studies that have compared these methods in the carbon auction market. Generally, in this area, various methods have been used to analyze the behavior of participants. For instance, Jiang et al. [19] investigate how market power can influence the auction price using mathematical models and some operations like derivative. They consider two allocations patterns, mixed allocation, and single auction. They examine the effect of these two patterns on compliance costs and welfare. Cong and Wei [12] utilize experimental method to compare some carbon auction methods. Dormady [14] provides some experiments to simultaneously investigate a carbon and energy market under real-world market characteristics. Due to the lack of auction data and the nature of its gameplay, individual behavior in this system is complex [12]. 
Equations cannot model interactions in these systems [5], and experiments are expensive. One of the powerful tools in studying complex systems is the multi-agent-based model [31]. For instance, Cong and Wei [11] use this tool to compare some carbon auction methods. They consider the government as the auctioneer and two types of plants as the bidders. Tang et al. [28] design a carbon allowance auction market using the multi-agent-based model. They consider two agents: the government as the regulator of emission trading scheme and different firms in all parts of China. Yu et al. [21] propose a multi-agent-based model to simulate an emission trading scheme consisting of some firms in China. Our work is an extension of the search conducted by Cong and Wei [11]. Their study was limited to the carbon auction market, while the carbon trading market and carbon trading price can affect carbon auction and vice versa. They used the Roth-Erev reinforcement learning algorithm to adjust the bidding strategies. Their algorithm presented a bidding strategy based on the result obtained from the past, while studies show that if the agents can anticipate the long-term outcomes of their current bidding strategy rather than optimizing their immediate rewards, their profitability will improve [29]. As well as, in their study, each agent could generate a bid price close to zero because of not using reserve price. Therefore, the permits might have been sold below their value. Reserve price is the minimum price that the government expects to be paid for a permit unit in the carbon auction. It can guarantee that the permits are not purchased below their real value [9]. This paper is going to compare the uniform pricing method and discriminative pricing method using the multi-agent-based model in which each adaptive agent illustrates a firm that takes part in carbon auction in a cap-and-trade scheme and determines its bid based on Q-learning. Watkins proposed the Q-learning algorithm to solve the Markovian Decision Problems with incomplete information. This algorithm has some features that make it appropriate for repeated games against unknown opponents. First, it does not need a model from the environment. Second, it can be utilized on-line to discover the optimal strategy using the experience gained from interacting directly with the environment [29,32]. Third, an agent can predict the long-term outcomes of its actions and the actions of other agents, and therefore, it can be able to correctly model the other agents and achieve the optimal bidding strategy [29]. This algorithm has received increasing attention in the electricity auction market and has become a major tool in solving this problem [23,24,32,33]. As mentioned, the carbon permit auction market is a complex economic system due to a lack of data. In this system, each agent's behavior is strongly dependent on other agents' behavior and market conditions. Besides, each of the agents faces a lack of knowledge about their other competitors. Under such circumstances, building a model for the economic system is a complex problem, and using free-model algorithms such as the Q-learning algorithm can be very appropriate. The next section formulates the presented multi-agent-based model. Section 3 describes the Q-Learning algorithm and proposes agents' bidding strategies according to it. Section 4 presents the experimental results. Section 5 concludes the paper and offers direction for future research. 
Problem definition

In this section, we present our multi-agent-based model to compare two pricing methods, namely the uniform pricing method and the discriminative pricing method, in a carbon auction market. We consider two kinds of agents, the government and the firms. The government, as auctioneer, allocates the initial carbon permits to the bidders and determines the actual price (the price that each firm should pay). It also regulates the carbon trading price to prevent market power and price manipulation. The firms, as bidders, simultaneously submit demand schedules (the prices and the quantities they are willing to buy at those prices) to the auctioneer. The government determines the clearing price, the carbon permits gained by each firm, and the actual carbon price by forming the aggregate demand and supply curves. The government adds the demand schedules together to build the aggregate demand curve. The price at which the aggregate demand curve and the supply curve intersect is the clearing price. Bids above the clearing price are filled, bids at the clearing price are rationed, and bids below the clearing price are rejected. By comparing the carbon permits required for production with the carbon permits gained in the auction, each firm determines its supply or demand in the carbon trading market. Given the amount of supply and demand in this market, the government determines the carbon trading price. If the required carbon permits exceed the carbon permits gained from the two markets, the firm is fined. Table 1 defines the main parameters used throughout this paper.

Agents' decisions

As mentioned in the previous section, the government as auctioneer first determines the initial carbon permit TotalPermit. Suppose the minimum carbon permit required for all firms to support their production is Totalemission. To decrease total emissions, the government implements a reduction policy and controls the initial carbon permit; it therefore supplies only a percentage φ of Totalemission. Consequently, TotalPermit is calculated based on equation (2.1):

TotalPermit = Totalemission * φ. (2.1)

The firms are the bidders in our model. Each firm i needs e_i permits to produce one unit of product. Suppose the production cost and selling price per unit product for firm i are c_i and p_i, respectively. Therefore, the value of a permit to firm i can be calculated as equation (2.2):

v_i = (p_i − c_i) / e_i. (2.2)

In the equation above, v_i is the private value (the maximum price that each firm is willing to pay for a permit unit). In the carbon auction market, each firm submits a bid price bp_i that lies between the reserve price and v_i, and a bid volume bv_i (the carbon permits required to support its production). The government ranks the firms in descending order of their bid prices to form the aggregate demand curve.
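A small numerical illustration of equations (2.1) and (2.2) is given below, with made-up parameter values; the per-permit value is read here as the production margin spread over the permits needed per unit of product.

```python
# Worked example of equations (2.1) and (2.2) with made-up numbers.
phi = 0.9                        # fraction of the required permits that the government supplies
total_emission = 10_000          # permits needed if every firm produced at full demand
total_permit = total_emission * phi                 # eq. (2.1)

e_i, c_i, p_i = 2.0, 6.0, 11.0   # permits per unit, cost per unit, selling price per unit
v_i = (p_i - c_i) / e_i          # eq. (2.2): value of one permit to firm i
print(total_permit, v_i)         # 9000.0, 2.5
```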
Table 1. Main parameters.

TotalPermit: The initial carbon permit
Totalemission: The minimum carbon permit required for all firms to support their production
φ: The percentage of carbon permit supply
e_i: The carbon permits required to produce one unit of product by firm i
c_i: Production cost per unit product for firm i
p_i: Selling price per unit product for firm i
v_i: Private value for firm i
bp_i: The bid price of firm i
bv_i: The bid volume of firm i
ep: The equilibrium price in the carbon auction market
gv_i: Carbon permits gained in the auction by firm i
cp: Market clearing price
ap_i: The price paid for each carbon permit unit by firm i
rv_i: The total carbon permits required by firm i
nv_i: Surplus or insufficient carbon permits of firm i
PermitSupply: Total permit supply in the carbon trading market
PermitDemand: Total permit demand in the carbon trading market
nv'_i: The actual carbon permits traded by firm i in the carbon trading market
ctp: Carbon trading price
Srevenue_i: Sales revenue of firm i
Ccost_i: The costs of carbon permits in the auction and carbon trading market for firm i
Pun_i: The penalty paid by firm i for emissions exceeding the gained permits
Q_i: The product sales amount of firm i
Temission_i: Total emissions generated by firm i
Gpermit_i: Total permits obtained by firm i in the carbon auction and carbon trading market

Algorithm parameters:
S = {s_1, s_2, ..., s_n}: The set of states the environment can adopt
A = {a_1, a_2, ..., a_m}: The set of actions the agent can select
Q(s, a): Q-value for each permissible pair (s, a)
π*(s): The optimal policy
P_ss'(a): Probability of the environment state changing from s to s' when action a is selected
R^a_ss': The immediate reward of the agent for taking action a in state s and moving the environment to state s'
T = {0, 1, ...}: The steps over which the algorithm is repeated
s_t: The environment state at step t
a_t: The agent's action at step t
r_t: The immediate reward obtained by the agent at step t
γ: Discount factor
α: Learning rate
ε: A small exploration probability

The equilibrium price is equal to the bid price generated by firm k that satisfies the following inequalities, where the firms are indexed in descending order of their bid prices:

bv_1 + bv_2 + ... + bv_(k−1) < TotalPermit, (2.3)
bv_1 + bv_2 + ... + bv_k ≥ TotalPermit. (2.4)

Therefore, the equilibrium price ep is equal to bp_k, i.e., ep = bp_k. The carbon permits gained by firm i (gv_i) in the carbon auction market are calculated, for both pricing methods, according to the following equation:

gv_i = bv_i, if bp_i > ep;
gv_i = a rationed share of the residual supply TotalPermit − Σ_{j: bp_j > ep} bv_j, if bp_i = ep;
gv_i = 0, if bp_i < ep. (2.5)

In the uniform price method, every winner pays the market clearing price cp, which is equal to the equilibrium price ep. Therefore, the actual carbon price ap_i (the price that firm i should pay) in the uniform pricing method is calculated according to equation (2.6):

ap_i = cp = ep. (2.6)

In the discriminative price method, the winners pay their bidding prices. So, the actual carbon price ap_i under the discriminative pricing rule is calculated according to equation (2.7):

ap_i = bp_i. (2.7)

The clearing price cp in the discriminative price method is calculated based on equation (2.8). After the firms obtain the initial carbon permits in the carbon auction market, they can trade their carbon permits in the carbon trading market. If the carbon permits obtained by firm i in the carbon auction, gv_i, are greater than its required carbon permits rv_i, the firm has surplus carbon permits, i.e., nv_i = rv_i − gv_i < 0. Therefore, firm i can sell them in the carbon trading market. Similarly, if gv_i is less than rv_i, the firm has insufficient permits, i.e., nv_i = rv_i − gv_i ≥ 0, so it should purchase the required carbon permits in the carbon trading market.
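To make the two settlement rules concrete, the sketch below clears one sealed-bid round under both methods: bids are ranked, the marginal bidder is found, and winners pay either the clearing price (uniform) or their own bids (discriminative). The bid values and supply are invented, and for simplicity the marginal firm is rationed to whatever supply remains, which is one possible reading of the rationing rule in equation (2.5).

```python
# One sealed-bid round cleared under both pricing rules (illustrative numbers only).
bids = {  # firm: (bid price, bid volume)
    "f1": (2.4, 300), "f2": (2.1, 250), "f3": (1.9, 400), "f4": (1.7, 350), "f5": (1.2, 300)
}
supply = 900

ranked = sorted(bids.items(), key=lambda kv: kv[1][0], reverse=True)
alloc, remaining = {}, supply
equilibrium_price = ranked[-1][1][0]        # fallback if supply exceeds total demand
for firm, (price, volume) in ranked:
    take = min(volume, remaining)            # marginal firm is rationed to the residual supply
    alloc[firm] = take
    remaining -= take
    if remaining == 0:
        equilibrium_price = price             # bid price of the marginal firm k
        break

uniform_payment = {f: equilibrium_price * q for f, q in alloc.items()}   # winners pay the clearing price
discriminative_payment = {f: bids[f][0] * q for f, q in alloc.items()}   # winners pay their own bids
print("clearing price:", equilibrium_price)
print("uniform:", uniform_payment)
print("discriminative:", discriminative_payment)
```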
Total permit supply and total permit demand in the carbon trading market, denoted by PermitSupply and PermitDemand, respectively, are calculated according to the following equations:

PermitSupply = Σ_{i: nv_i < 0} |nv_i|, (2.9)
PermitDemand = Σ_{i: nv_i ≥ 0} nv_i. (2.10)

When the total permit supply is greater than the total permit demand, i.e., PermitSupply ≥ PermitDemand, the carbon trading price decreases and the actual permits traded by firm i, nv'_i, are obtained by equation (2.11). Otherwise, the carbon trading price increases and nv'_i is calculated according to equation (2.12).

Determining the carbon price in the carbon trading market

The carbon trading price has a significant effect on the carbon auction. If the carbon trading price is high, the bidders might attempt to buy more permits in the auction in order to sell them in the carbon trading market at a higher price. If the carbon trading price is low, the competition between the bidders is reduced and, consequently, the clearing price decreases. To maintain stability and avoid manipulation in the carbon market, the government sets the carbon trading price ctp around the market clearing price within a given range β ∼ U(0, 0.3).

Agents' bidding strategy

In this section, we present a profitable bidding strategy based on the Q-learning algorithm. First, we describe the Q-learning algorithm and then the proposed bidding strategy.

Q-learning algorithm

Suppose an agent interacts with its environment at discrete time steps, t = 0, 1, 2, ..., S = {s_1, s_2, ..., s_n} is a finite set of states that the environment can adopt, and A = {a_1, a_2, ..., a_m} is a finite set of actions that the agent can select. At each time step t, the agent observes the present state of the environment s_t = s ∈ S and then takes an action a_t = a ∈ A. Consequently, the agent receives an immediate reward r_{t+1} and the environment moves to a new state s_{t+1} = s' ∈ S according to the transition probability P_ss'(a). In the Q-learning algorithm, there is a lookup table containing a Q-value for each permissible pair (s, a). First, the table is initialized either randomly or according to the agent's knowledge. The purpose of the agent is to find the optimal policy π*(s) ∈ A that maximizes the Q-value of each state over the long run by using the Bellman optimality equation:

Q*(s, a) = Σ_s' P_ss'(a) [ R^a_ss' + γ max_a' Q*(s', a') ]. (3.1)

In equation (3.1), γ (0 ≤ γ ≤ 1) is the discount factor and determines how important the future rewards are for the agent. R^a_ss' is the immediate reward that the agent receives for taking action a in state s and moving the environment from state s to s'. Without knowing P_ss'(a), the Q-learning algorithm is capable of finding the optimal policy for each state by online estimation of Q(s, a) in a recursive manner, using the data s_t, a_t, s_{t+1} and r_{t+1}. The updating equation is as follows:

Q(s_t, a_t) ← Q(s_t, a_t) + α [ r_{t+1} + γ max_a Q(s_{t+1}, a) − Q(s_t, a_t) ], (3.2)

where α (0 < α < 1) is the learning rate and represents the degree to which new data affect the update of the estimated Q-values. In other words, α represents how strongly the agents weight recent information when exploring possibilities. Based on the convergence theorem of Q-learning, the Q-values converge to the optimal values if each pair (s, a) is visited infinitely often and α decreases appropriately. For a single agent, the environment is stationary and Markovian, but our problem is a multi-agent case in which an agent's optimal policy depends on the other agents' strategies. Therefore, each agent presents a non-stationary and non-Markovian environment to the other agents, and these conditions do not guarantee convergence to the optimal policy.
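As a concrete illustration of the update in equation (3.2), the sketch below performs one tabular Q-learning update; the state, action grid, reward, and parameter values are illustrative placeholders.

```python
# Tabular Q-learning update, i.e. eq. (3.2):
# Q(s,a) <- Q(s,a) + alpha * (r + gamma * max_a' Q(s',a') - Q(s,a))
from collections import defaultdict

def make_q_table():
    return defaultdict(float)   # Q[(state, action)] defaults to 0.0

def q_update(q, state, action, reward, next_state, actions, alpha, gamma):
    best_next = max(q[(next_state, a)] for a in actions)
    td_target = reward + gamma * best_next
    q[(state, action)] += alpha * (td_target - q[(state, action)])

# toy usage: a handful of (price index, volume index) actions
actions = [(p, v) for p in range(3) for v in range(2)]
q = make_q_table()
q_update(q, state=5, action=(1, 0), reward=120.0, next_state=6,
         actions=actions, alpha=0.9, gamma=0.1)
print(q[(5, (1, 0))])   # 108.0 for this single update
```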
Some progress has been made in this area [17,18,26], but they are for some special cases and seem inappropriate for practical problems. Despite these, among all the reinforcement learning algorithms, Q-learning is applied in many studies mainly because of its simplicity. Q-learning based bidding strategy In auctioning carbon permits, firms try to determine their bid prices and bid volumes to maximize their profits. Based on the Q-learning algorithm, every firm learns from its bidding experience during runs to bid profitably. To determine the profitable bidding strategy for firms using the Q-learning algorithm, the states of the environment, actions, and rewards must first be specified. The market clearing price is considered as the state of the environment. That changes between the reserve price value and the maximum price that the firms can afford to pay per unit of permit. To prevent "the curse of dimensionality" the state space is equally discretized into 16 states. Deciding on bidding price and volume is the action of each agent. The bidding price for firm i is between the reserve price and v i , and bidding volume is between the minimum permits that the firm requires for supporting its productions and the maximum permits that the firm is allowed to bid. The price space and volume space are equally discretized into 6 prices and 16 volumes, respectively. Therefore, the action space is 6 * 16. The reward of firm i participating in carbon permit auction is its benefit function that is calculated as follows: where Srevenue i = (p i − c i ) * Q i is sales revenue (where c i , p i and Q i present production cost and selling price per unit product and the sales amount of firm i, respectively). Ccost i displays the costs of carbon permits in the auction and carbon trading market and is calculated according to equation (3.6): P un i presents the penalty paid by firm i. If total emissions generated by firm i (T emission i ) exceed the total permits obtained from the carbon auction and carbon trading market (Gpermit i = gv i + nv i ), it should pay the penalty for its non-compliance emissions. Algorithm implementation According to Figure 1, the steps of firms' learning and bidding are given as follows: (1) Initialization: first, the input parameters of the algorithm are initialized. Small random numbers or 0 are assigned to all state-action combinations for each firm. Suppose, Maxiter is the maximum iteration intended to run the algorithm. It is a termination condition; therefore, the following steps are repeated until the termination condition is reached. (2) State identification: in each iteration, the agents utilize the market clearing price on the previous step as the current state. In the first iteration, the reserve price (the lowest possible value) is considered the environment's state. (3) Action selection: after identifying the environment's state, the agents select their actions (bidding decisions) according to Q-values saved in Q-value lookup tables for each state-action pair. To choose action, the agents utilize the -greedy method to balance exploitation and exploration. According to this method, the agents select the action with maximum Q-value in the state s with high probability 1 − and a random action from all admissible actions with a small probability . 
(4) Q-value update: after declaring the market clearing price and carbon permit allocated to each firm by the government, the firms calculate their rewards according to equation (3.5) and update the Q-values according to the current rewards and next state, which is the market clearing price of the current iteration using equation ( Experimental analysis In this section, based on the multi-agent Q-learning, the uniform price auction and discriminative auction are compared. Therefore, the algorithm parameters, data, and the results of the simulation are presented in the next two sub-sections. Parameter setting To compare two auction formats, it is supposed that five adaptive agents compete in the auction carbon market and trade their permits in the carbon trading market and explore their bidding strategy. The parameters of agents are randomly generated from the interval shown in Table 2. As can be from Table 2, the lower and upper bounds of each parameter are close to each other; therefore, there is a perfectly competitive market, and no agent possesses the market power. The Q-learning parameters α, γ and is considered 0.9, 0.1 and 0.5, respectively. These values have an essential role in exploring and convergence of the Q-learning algorithm, Therefore, the related values are selected based on the previous studies conducted in this context, like Sadr et al. [25] and Poursalimi Jaghargh and Mashhadi [22], in a way that a balance between exploration and exploitation is obtained. To ensure that permits are not sold below their value, we set the reserve price at 0.5. Experiment results In this section, we compare the uniform price and discriminative price method in terms of bidding price, bidding volume, firms' profit, the government revenue, the clearing price, the carbon trading price, the total amount of permits traded in the carbon trading market, the firm's benefit reduction in the carbon auction relative to the free method and emission reduction. Finally, we compared the performance of the Q-Learning algorithm with the Roth-Erev algorithm that was used in Cong and Wei [11], in terms of firms' benefit. The algorithm has coded in C++ and complied with Microsoft Visual Studio 2012. The φ value can influence the results; therefore, we compare these two price methods for different φ values. To investigate our results for low to high φ values, we change it from 80% to 100%. According to equation (2.1), when the value of φ is small, the carbon supply quantity is small and vice versa. Figures 2 and 3 give the equilibrium price and bid prices of all agents under the uniform and discriminative pricing rules, respectively. As we can see from the two figures, the agents bid prices are much higher than the equilibrium price in the uniform price auction, while, in the discriminative price auction, they bid prices as close to equilibrium price as possible. That is because, in the discriminative price, every winner pays its bid. Hence, the bidders tempt to predict the equilibrium price and bid close to it, but in the uniform price, since every winner pays the equilibrium price, therefore, forecasting the equilibrium is less important [13]. This fact makes the bid prices in the uniform auction have a higher dispersion [2,3,30]. For example, in our experiment, the average variances for bid prices in the uniform and discriminative price are 0.423 and 0.05 price unit, respectively. As shown in the two figures, the bid price of agent 2 is almost equal to the equilibrium price for all values of φ. 
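Steps (1) to (4) of the implementation, together with the parameter values reported in this section (five agents, α = 0.9, γ = 0.1, ε = 0.5, reserve price 0.5), can be assembled into a compact learning loop. The skeleton below replaces the full market model with toy placeholders for clearing and reward, so it is meant only to illustrate the structure of the procedure, not to reproduce the actual simulation.

```python
# Skeleton of one learning run: state -> epsilon-greedy bid -> clearing -> reward -> Q-update.
import random
from collections import defaultdict

ALPHA, GAMMA, EPSILON, RESERVE = 0.9, 0.1, 0.5, 0.5
N_AGENTS, N_STATES, MAX_ITER = 5, 16, 200
PRICES = [round(RESERVE + 0.1 * k, 1) for k in range(6)]   # 6 discretized bid prices
VOLUMES = list(range(50, 210, 10))                          # 16 discretized bid volumes
ACTIONS = [(p, v) for p in PRICES for v in VOLUMES]

def eps_greedy(q, state):
    if random.random() < EPSILON:                    # explore
        return random.choice(ACTIONS)
    return max(ACTIONS, key=lambda a: q[(state, a)]) # exploit

def clear_auction(bids, supply=600):
    """Toy uniform-price clearing: highest bids are served first."""
    alloc, remaining, cp = {i: 0 for i in bids}, supply, RESERVE
    for firm, (price, volume) in sorted(bids.items(), key=lambda kv: kv[1][0], reverse=True):
        take = min(volume, remaining)
        alloc[firm], remaining = take, remaining - take
        if take > 0:
            cp = price                               # last bid that received permits
    return cp, alloc

q_tables = [defaultdict(float) for _ in range(N_AGENTS)]
state = 0                                            # start in the lowest clearing-price state
for _ in range(MAX_ITER):
    bids = {i: eps_greedy(q_tables[i], state) for i in range(N_AGENTS)}
    cp, alloc = clear_auction(bids)
    next_state = min(int((cp - RESERVE) * 10), N_STATES - 1)   # crude clearing-price discretization
    for i, action in bids.items():
        reward = (2.0 - cp) * alloc[i]               # toy profit: permit worth 2.0, bought at cp
        best_next = max(q_tables[i][(next_state, a)] for a in ACTIONS)
        q_tables[i][(state, action)] += ALPHA * (reward + GAMMA * best_next - q_tables[i][(state, action)])
    state = next_state

print("sample learned Q-values:", list(q_tables[0].items())[:3])
```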
Impact of auction form on bidding price All firms bid below their private values in both methods. It can be seen in Figure 4. Note with the increase in the value of φ, the bid prices decrease. This is because φ affects supply quantity. When the supply quantity is low, the competition between firms increases; therefore, they bid higher prices and close to their private values. But, when the supply quantity increases, the competition decreases, so the firms bid as close to the reserve price as possible. As you can see, the bid prices are more reduced in the discriminative price method, and they are almost equal to 0.5 (reserve price) for φ = 90% to 100% for all firms. As previously mentioned, the bid price of firm 2 is equivalent to equilibrium price for φ = 80% to 98%, because this firm wins some of its bid; therefore, it is not economically feasible that it bids a higher price. From φ = 98% to 100%, firm 2 increases its price in the uniform price because other firms decrease their bid prices, and it allows firm 2 to increase its bid price and win more than before. Impact of auction form on bidding volume As shown in Figure 5, the bid volumes fluctuate around a fixed value in the uniform price method. Because, in this method, the firms try to influence the equilibrium price and keep it down. While with the increase in φ, bids volume increase in the discriminative pricing method. There are two reasons for this: first, in this method, firms pay their bids and, second, with the increase in φ, according to Figure 4, the bid prices decrease so the firms can increase their bid volume to raise their benefits. Therefore, we can say for the high value of φ, bid shading in the discriminative price is less than the uniform price method. Impact of auction form on firms' profit The benefit of firms is influenced by φ as shown in Figure 6. For φ less than 90%, the benefit of firms in the uniform pricing is more than the discriminative pricing method. This is because the firms pay the equilibrium price in the uniform auction. But for φ greater than 90%, the benefit of firms in the discriminative pricing is equal to or greater than the benefit of firms in the uniform pricing. Because, as shown in Figure 4, the bid prices are too low and almost equal to the equilibrium price for φ greater than 90%, therefore the benefit of firms increase. It can be concluded that when supply is scarce, the uniform price auction is more beneficial to bidders than discriminative price and vice versa. Figure 7 shows, when the carbon permit is scarce the government's revenue in the discriminative price method is greater than that in the other method. Therefore when the carbon permit is large, the government should utilize the uniform price method, and when the carbon permit is scarce, the government should use the discriminative price method [11]. Figure 8 shows that the supply quantity can influence the clearing price. In the uniform price method, the clearing price is almost stable. But when the value of φ is less than 90%, the clearing price in the discriminative price is greater than that in the uniform price method, and for φ larger than 90%, it is less in the discriminative price method. Because with the increase in the value of φ in the discriminative price, the bid prices decrease and fall near the reserve price, so the clearing price reduces. In the uniform price method, the clearing price is equal to the equilibrium price, and as you can see in Figure 2, the equilibrium price is almost stable. 
Impact of auction form on the carbon trading price As mentioned before, the government determines the carbon trading price using the clearing price. Therefore, the relationship between the carbon trading price and supply quantity in the two pricing methods will be similar to the relationship between the clearing price and supply quantity. You can see it in Figure 9. Impact of auction form on the carbon permit traded in the carbon trading market The volume of carbon permits traded in the carbon trading market is different for the two pricing methods. As is evident from Figure 10, with the increase in supply quantity, the volume of trading increases in the discriminative price, while it is almost stable in the uniform price method. As before mentioned, the bidders in the uniform price keep their volume of bids as low as possible to influence the equilibrium price, but in the discriminative price, with the increase in supply quantity, the bidders increase their volume of bids to resell them at secondary price and raise their revenues. Impact of auction form on the auction efficiency In this section, we consider the auction efficiency as a performance measure. It can be calculated as follows [12]: In the above equation, EOG i is the actual earning of the government from auction and penalties. TBF i shows the total benefit of the firms. In two parameters, i represents the type of auction. As shown in Figure 11, we definitely can not say which method is more efficient than the other. It depends on the supply quantity. When the supply quantity is low, the discriminative pricing method is more efficient than the uniform pricing method, but in case that the supply quantity is high, the uniform pricing method is more efficient than the other. Impact of auction form on firms' benefit and emission reduction In this section, our analysis focuses on the comparison of firms' benefit reduction in the auction compared to the situation that initial permit is allocated by free methods and emission reduction for the two pricing methods. As shown in Figure 12, the values of firms' benefit reduction are positive for two methods; in other words, the carbon auction causes the firms' benefit to decrease. When φ is less than 90%, the amount of benefit reduction in the discriminative price is more significant than the uniform price and vice versa. This is because, in the uniform price, the firms earn more benefit than the other method for φ less than 90% (see Fig. 6). As is evident from Figure 12, lowering the value of φ results in greater benefit reduction and emission reduction. The best environmental performance is obtained at φ = 80%, i.e., when emission reduction is 20%. At this point, the most significant reduction in firms' benefit occurs. The smallest values of firms' benefit reduction in the uniform and discriminative price methods occur at φ = 96% and φ = 100%, respectively. In other words, at these points, the best economic performances are obtained for two pricing methods. The results of this section are appropriate for policymakers to determine the right value φ to balance between economic and environmental goals. The comparison of Q-learning algorithm with Roth-Erev algorithm As mentioned in Section 1, the Roth-Erev algorithm used by Cong and Wei [11] utilizes only past information and determines the bidding strategy. While if the agents can predict the long-term results of their current decisions, their profitability will increase. As said, the Q-learning algorithm can tackle this problem. 
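For contrast with the Q-learning bidder, a much-simplified version of the basic Roth-Erev propensity update underlying the benchmark algorithm is sketched below. The recency parameter, the flat initialization, and the omission of experimentation spillover to neighboring actions are simplifications, so this only highlights that the benchmark reinforces actions from past rewards without looking ahead to future states.

```python
# Simplified Roth-Erev scheme: propensities decay by a recency factor, the action just played
# is reinforced by its reward, and the next action is chosen with propensity-proportional probability.
import random

def roth_erev_update(propensity, played, reward, recency=0.1):
    for a in propensity:
        propensity[a] *= (1.0 - recency)   # forget old reinforcement gradually
    propensity[played] += reward           # reinforce the action that was just played

def choose(propensity):
    total = sum(propensity.values())
    r = random.uniform(0.0, total)
    acc = 0.0
    for a, q in propensity.items():
        acc += q
        if r <= acc:
            return a
    return a

prop = {a: 1.0 for a in range(5)}           # flat initial propensities over 5 bid actions
roth_erev_update(prop, played=2, reward=3.0)
print(prop, choose(prop))
```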
In this section, we examined this claim and test the Roth-Erev algorithm and compare the results with the ones obtained from the Q-learning algorithm under the uniform and discriminative price for all values of φ. As shown in Figure 13, the average firms' profit in the Q-learning algorithm outperforms the one in the Roth-Erev algorithm for all φ values and two pricing methods. Conclusions In this paper, we present a multi-agent-based model to compare the uniform price and the discriminative price methods in the carbon auction. Agents represent the firms participating in the auction, and they can develop their bids using the Q-learning algorithm. Our findings show that the agents' bidding behaviors are different in the two approaches. We cannot say which method is more advantageous, but the results are remarkably close to the theoretical predictions. The main of our conclusions are as follows: Figure 13. Comparison of average firms profits for two algorithms in the two pricing methods. (1) The bid prices in the uniform price method have higher dispersion, but they are as close to the equilibrium price as possible in the discriminative price method. Our results also show that bid prices in the uniform price method are almost greater than that in the other method. (2) In the uniform price method, the bidders bid below their maximum volume to decrease the equilibrium price, but in the discriminative price method, with the increase in supply quantity, the bidders bid their maximum volume. (3) When the supply quantity is low, the government gets more revenue under the discriminative price method, and the firms earn more revenue in the uniform price method, but when the supply quantity is large, the advisable method for the government is the uniform price method and is the other method for the firms. (4) For low supply quantity, the clearing price and consequently the carbon trading price in the discriminative price are much more than those in the uniform price method, but when supply quantity is abundant, the clearing price and carbon trading price are almost equal in two methods. (5) The amount of carbon permits traded in the carbon trading market in two pricing methods is different from each other. Because of bid shading, it is more stable in the uniform price than the discriminative price method. With the increase in supply quantity, the tradable carbon permits increase in the discriminative price method. Therefore, when the supply quantity is scarce in the auction, the tradable carbon permits are low for two methods, but when the supply quantity increases, they increase in the discriminative price method and are almost without changing in the other method. (6) The auction efficiency differs in two pricing methods for different supply quantities. When the supply quantity is low (high), the discriminative price method (the uniform price method) is more efficient than the uniform price method (the discriminative price method). (7) The amounts of emission reduction and firms' benefit reduction relative to the situation that carbon permit is allocated for free depend on carbon supply quantity. A decrease in carbon supply quantity results in a further increase in these values. When the carbon supply quantity is low, the firms' benefit reduction in the discriminative price method is greater than the uniform price method and vice versa. 
Therefore, if the government wants to decrease the emissions as much as possible and does not want to harm firms economically, the carbon permits should be auctioned in the uniform price method. Finally, we compare the Q-learning algorithm with the Roth-Erev algorithm. The results demonstrate our method outperforms the Roth-Erev algorithm, and the agents obtain more benefits by using this method. In summary, the performance of the two pricing methods depends on the amount of carbon permits allocated by the government in the auction market. Incorporating abatement activities and their costs into the presented model can be investigated for future research.
2021-08-20T18:41:53.356Z
2021-05-01T00:00:00.000
{ "year": 2021, "sha1": "1b8dcec42447c9141d4091f1f45bef8f2b210c46", "oa_license": "CCBY", "oa_url": "https://www.rairo-ro.org/articles/ro/pdf/2021/04/ro200367.pdf", "oa_status": "HYBRID", "pdf_src": "Anansi", "pdf_hash": "f60768ab7adfc4c573aef1903f58e736ed541b3a", "s2fieldsofstudy": [ "Economics" ], "extfieldsofstudy": [ "Economics", "Computer Science" ] }
226959367
pes2o/s2orc
v3-fos-license
An lp17-encoded small non-coding RNA with a potential regulatory role in mammalian host adaptation by the Lyme disease spirochete

The bacterial agent of Lyme disease, Borrelia burgdorferi, relies on an intricate gene regulatory network to transit between the disparate Ixodes tick vector and mammalian host environments. We recently reported that a B. burgdorferi mutant lacking an intergenic region of lp17 displayed attenuated murine tissue colonization and pathogenesis due to altered antigen expression. In this study, a more detailed characterization of the putative regulatory factor encoded by the region was pursued through genetic complementation of the mutant with variants of the intergenic sequence. In cis complemented strains featuring mutations aimed at eliminating potential BBD07 protein translation were capable of full tissue colonization, suggesting that the region encodes an sRNA. In trans complementation resulted in elevated transcription levels and was found to completely abolish infectivity in both immunocompetent and immunodeficient mice. Quantitative analysis of transcription of the putative sRNA by wild type B. burgdorferi showed it to be highly induced during murine infection. Lastly, targeted deletion of this region resulted in significant changes to the transcriptome, including genes with potential roles in transmission and host adaptation. The findings reported herein strongly suggest that this lp17 intergenic region encodes for an sRNA with a critical role in the gene regulation required for adaptation and persistence of the pathogen in the mammalian host.

Author Summary
Lyme disease continues to emerge as a devastating infection that afflicts hundreds of thousands of people annually in the United States and abroad, highlighting the need for new approaches and targets for intervention. Successful development of these therapies relies heavily on an improved understanding of the biology of the causative agent, Borrelia burgdorferi. This is particularly true for the critical points in the life cycle of the pathogen where it must transition between ticks and mammals. Variation in the levels of bacterial gene expression is the lynchpin of this transition and is known to be driven partly by the activity of regulatory molecules known as small non-coding RNAs (sRNAs). In this work, we characterize one of these sRNAs by providing experimental evidence that the transcribed product does not code for a protein, by testing the effects of its overproduction on infectivity, and by interrogating whether its activity causes changes in expression levels of genes at the level of transcription. The findings of this study provide further evidence that regulatory sRNA activity is critical for transmission and optimal infectivity of B. burgdorferi and contribute to the recently growing effort to attribute specific roles to these important molecules in the context of Lyme disease.

Introduction
[...] transcript from this same region was also detected in a pair of earlier studies investigating Rrp2-, RpoN-, and RpoS-dependent genes, where its expression was shown to be highly dependent on an intact alternative sigma factor pathway (38,39). We recently demonstrated that an lp17 left-end deletion mutant lacking this region displays attenuated murine tissue colonization and pathogenicity, which was ultimately attributed to a 317 bp intergenic region encompassing the bbd07/SR0726 locus (40).
The tissue colonization defect was not observed during infection of SCID mice, indicating a potential role for this locus in avoidance of adaptive immunity. Further study provided evidence that deletion of this region results in dysregulated antigen expression, implicating the gene product as a regulatory factor.

In the current study, we aimed to test the hypothesis that the bbd07/SR0726 region of lp17 encodes an intergenic regulatory sRNA that participates in the transcriptome and proteome shift required for spirochetes to adapt to the mammalian host environment. To do this, BBD07 protein production was first ruled out experimentally through non-native bbd07/SR0726 complementation in the previously described lp17 left-end mutant (40). Then, the effects of bbd07/SR0726 overexpression were studied in the context of murine infection and antigen expression. These tests revealed that spirochetes expressing high levels of sRNA bbd07/SR0726 cannot infect mice, and that this phenotype may be independent of altered in vitro antigen expression. Next, the expression of this sRNA was quantified under in vivo conditions, which confirmed its putative involvement in the mammalian portion of the enzootic cycle. Finally, a targeted knockout clone of bbd07/SR0726 was generated and used for RNA-seq analysis, which revealed that its activity has effects on transcript levels in addition to the previously observed effects on the in vivo-expressed antigenic proteome. This work represents a significant step forward towards understanding the critical role that this sRNA has in host adaptation by the Lyme disease pathogen.

Results
Mutations that disrupt potential BBD07 protein production do not perturb the functionality of the gene product during murine infection.
The recent detection of a putative sRNA encoded within the intergenic space of lp17 containing the discontinued bbd07 ORF annotation (NC_001849.1 [discontinued]), coupled with numerous failed attempts to detect a BBD07 protein product, provided a strong indication that the bbd07/SR0726 locus encodes an sRNA. However, more conclusive results were needed to definitively identify the gene product as an sRNA. To further rule out the possibility of BBD07 protein production, two in cis complement strains were generated in an lp17 mutant strain lacking a region containing bbd01-bbd07 that is incapable of murine heart tissue colonization (∆1-7; (40)). These two genetically complemented strains harbored a bbd07/SR0726 copy containing either a premature stop codon or a disrupted start codon, and were denoted Comp7Nstopc and Comp7NΔstartc, respectively (Figure 1B). [...] Sequencing was performed to verify the single base pair mutations in each construct, and isogeneity to the parent clone was confirmed by multiplex PCR for native plasmid content (data not shown) (43). To determine if the altered bbd07/SR0726 sequences were able to rescue the mutant phenotype, the capacity for heart tissue colonization of immunocompetent mice was selected for use as a readout. We reasoned that this was a reliable indicator of gene functionality due to the fact that the ∆1-7 mutant has been previously shown to be unable to colonize heart tissue in immunocompetent mice (40). Groups of five C3H mice each [...]

Overexpression of ittB is deleterious to murine host infection.
Our laboratory recently reported that absence of the region encoding ittB affects antigen expression in vivo, which could potentially explain the observed attenuation in tissue colonization and pathogenesis (40). Thus, it was hypothesized that ittB overexpression may lead to altered murine infectivity due to abnormal antigen production. B. burgdorferi has been shown to maintain the pBSV2G shuttle vector at a higher copy number than its native plasmids, and expression levels of genes complemented on this vector can be elevated compared to wild type (42). Thus, in trans complemented strains that harbored ittB gene copies on pBSV2G were generated in the ∆1-7 mutant background to assess the effects of ittB overexpression. Complementation was performed by transforming ∆1-7 cells with pBSV2G carrying either a wild-type copy of ittB (Comp7Nt) or a copy containing an early stop codon (Comp7Nstopt). An empty vector control strain harboring pBSV2G without an ittB gene copy was also generated (CompEt). PCR verification of ittB presence or absence in these strains is illustrated in Figure 2A, and isogeneity to the parent strain was confirmed via multiplex PCR (data not shown).

To determine the relative in vitro expression levels of ittB in the wild type and Comp7Nt, qRT-PCR analysis was performed using cDNA derived from triplicate cultures of wild type, ∆1-7, and Comp7Nt grown to late log-phase (1×10^8 spirochetes ml^-1). As shown in Figure 2B, a ~24-fold increase in ittB expression by Comp7Nt spirochetes compared to the wild type was observed. Wild-type expression of ittB was found to be low, producing less than 2 ittB copies per 10^4 flaB copies. These results indicate that ittB may be tightly regulated, and that residence of the ittB sequence on a high-copy vector may disrupt this regulation that results in elevated in vitro transcription. To assess the effects of high copy ittB expression on murine infection, groups of five C3H mice each were needle inoculated (5×10^3 total spirochetes) with wild type, [...]

[...] is highly elevated during in vitro cultivation (Fig. 2B), we predicted that any potential resultant protein expression changes would be observable under the same conditions. To assess the effects of ittB overexpression on overall antigen production, cultures of wild type, ∆1-7, Comp7Nc, Comp7Nt, and CompEt spirochetes were grown in triplicate under the same conditions used to quantify ittB transcript levels in the Comp7Nt strain. Protein lysates (10^9 total cells) of each strain were subjected to Western blot analysis using murine immune sera harvested from mice that had been infected with wild type B. burgdorferi for 28 days. Surprisingly, no notable differences could be observed between the in vitro antigenic profiles of the strains tested (Fig. 3). This result is suggestive that the non-infectious phenotype exhibited by Comp7Nt may not be due to dysregulated antigen expression.

[...] bladder, or joints, respectively (Fig. 4). These values range from a 100- to 400-fold increase in ittB transcription levels relative to that observed for in vitro grown wild-type spirochetes (~2 copies of ittB per 10^4 copies of flaB, see Fig. 2B), supporting the hypothesis that ittB transcription is upregulated during mammalian infection.
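As a concrete illustration of the normalization used in the qRT-PCR comparison above (ittB copies expressed per 10^4 flaB copies, and fold change taken relative to the wild type), a minimal sketch follows. The copy numbers in it are invented placeholders, not measurements from the study.

```python
# Illustrative calculation of the normalization described above: ittB copies are
# expressed per 10^4 flaB copies, and fold change is taken relative to wild type.
# The copy numbers below are made-up placeholders, not data from the study.
def ittb_per_1e4_flab(ittb_copies, flab_copies):
    return ittb_copies / flab_copies * 1e4

wt = ittb_per_1e4_flab(ittb_copies=180, flab_copies=1_000_000)
comp7n = ittb_per_1e4_flab(ittb_copies=4300, flab_copies=1_000_000)
print(f"wild type: {wt:.1f} copies / 10^4 flaB")
print(f"Comp7Nt:   {comp7n:.1f} copies / 10^4 flaB "
      f"({comp7n / wt:.0f}-fold over wild type)")
```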
Despite ittB transcription being ~100-fold higher in heart-resident spirochetes than those grown in vitro, ittB expression levels in these spirochetes were significantly lower than those colonizing joint tissue (p≤0.05). Differences in ittB transcription either between heart and bladder or bladder and joint did not reach significance.

[...] infection, heart, bladder, joint, and ear tissue were harvested and cultured for detection of spirochete growth by dark field microscopy as described before. As was previously observed for Δ1-7, none of the heart tissue cultures from ΔittB-infected mice were positive for spirochete growth, further demonstrating that ittB is required for heart tissue colonization in mice (Fig. 6).

In this study, we attempted to further characterize the bbd07/SR0726 locus of lp17 by determining whether the functional product encoded by this intergenic region is an RNA or protein, and assessing any potential regulatory effects resulting from its targeted [...]

[...] PBS. Lysis was performed by adding 30 μL of 4X Laemmli loading dye containing 10% β-mercaptoethanol, and then heating the samples to 80°C for 10 minutes following at least one freeze-thaw cycle. Lysates (10^9 cells) were electrophoresed on a precast 4-15% polyacrylamide gradient gel (Bio-Rad). Western blotting was performed as described previously (52), using sera harvested from C3H mice infected with 5×10^3 spirochetes of the indicated strain for 28 days. Purified polyclonal antibodies (Rockland) were used in anti-FlaB Western blots.

Cultures were grown in triplicate under standard culture conditions to mid-log phase (5×10^7 spirochetes mL^-1), then temperature shifted by 1:100 dilution into room temperature BSK-II. Cultures were allowed to grow to mid log phase, at which point they were subcultured again 1:100 into warm BSK-II. These final cultures were then allowed to grow under standard conditions to late log-phase (1×10^8 spirochetes mL^-1). Cultures were pooled, and RNA was extracted using a hot phenol method described previously [...]

Fisher's exact test was used to determine significant differences in the ability to recover recombinant strains by culturing of tissues compared with wild type B. burgdorferi. Student's t-test was performed to determine significant differences in spirochete burden in heart tissue samples from qPCR analyses, where the average burden in heart tissues from mice infected with a given recombinant strain was compared to that from mice infected with the wild type. Student's t-test was also used to determine significant differences in the average in vitro ittB transcription levels between a given recombinant strain and the wild type by qRT-PCR. One-way ANOVA followed by all pairwise multiple comparison (Holm-Sidak) was used to determine significantly different levels of average in vivo ittB transcription by the wild type between each tissue tested.
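The statistical workflow described above (Fisher's exact test for culture positivity, Student's t-tests against the wild type, and one-way ANOVA with all-pairwise Holm-Sidak comparisons across tissues) can be reproduced in outline with standard SciPy/statsmodels calls. The sketch below uses placeholder numbers, and the pairwise step is approximated with t-tests followed by Holm-Sidak adjustment rather than the exact procedure of the original analysis software.

```python
# Sketch of the statistical comparisons listed above using SciPy/statsmodels;
# all numbers are placeholders, not values from the study.
from scipy import stats
from statsmodels.stats.multitest import multipletests

# Fisher's exact test: tissue-culture positivity, mutant vs. wild type
odds, p_fisher = stats.fisher_exact([[0, 5], [5, 0]])  # [positive, negative] per strain

# Student's t-test: spirochete burden (or in vitro ittB levels) vs. wild type
t, p_ttest = stats.ttest_ind([2.1, 1.8, 2.4], [0.9, 1.1, 1.0])

# One-way ANOVA across tissues, then pairwise comparisons with Holm-Sidak
# correction (an approximation of an all-pairwise multiple-comparison procedure)
heart, bladder, joint = [1.0, 1.2, 0.9], [1.5, 1.7, 1.4], [3.8, 4.1, 3.6]
f, p_anova = stats.f_oneway(heart, bladder, joint)
pairwise_p = [stats.ttest_ind(a, b).pvalue
              for a, b in [(heart, bladder), (heart, joint), (bladder, joint)]]
reject, p_adj, _, _ = multipletests(pairwise_p, method="holm-sidak")
print(p_fisher, p_ttest, p_anova, p_adj)
```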
2020-11-12T09:08:48.387Z
2020-11-06T00:00:00.000
{ "year": 2020, "sha1": "6f4eb53e2612b199fa4b1b46314d3782ff05016e", "oa_license": "CCBY", "oa_url": "https://www.biorxiv.org/content/biorxiv/early/2020/11/06/2020.11.06.371013.full.pdf", "oa_status": "GREEN", "pdf_src": "BioRxiv", "pdf_hash": "cc1304e9d761f5079300b102f528b7ed7821db8c", "s2fieldsofstudy": [ "Biology", "Medicine" ], "extfieldsofstudy": [ "Biology" ] }
15696624
pes2o/s2orc
v3-fos-license
Identification of a Novel Topoisomerase Inhibitor Effective in Cells Overexpressing Drug Efflux Transporters Background Natural product structures have high chemical diversity and are attractive as lead structures for discovery of new drugs. One of the disease areas where natural products are most frequently used as therapeutics is oncology. Method and Findings A library of natural products (NCI Natural Product set) was screened for compounds that induce apoptosis of HCT116 colon carcinoma cells using an assay that measures an endogenous caspase-cleavage product. One of the apoptosis-inducing compounds identified in the screen was thaspine (taspine), an alkaloid from the South American tree Croton lechleri. The cortex of this tree is used for medicinal purposes by tribes in the Amazonas basin. Thaspine was found to induce conformational activation of the pro-apoptotic proteins Bak and Bax, mitochondrial cytochrome c release and mitochondrial membrane permeabilization in HCT116 cells. Analysis of the gene expression signature of thaspine-treated cells suggested that thaspine is a topoisomerase inhibitor. Inhibition of both topoisomerase I and II was observed using in vitro assays, and thaspine was found to have a reduced cytotoxic effect on a cell line with a mutated topoisomerase II enzyme. Interestingly, in contrast to the topoisomerase II inhibitors doxorubicin, etoposide and mitoxantrone, thaspine was cytotoxic to cell lines overexpressing the PgP or MRP drug efflux transporters. We finally show that thaspine induces wide-spread apoptosis in colon carcinoma multicellular spheroids and that apoptosis is induced in two xenograft mouse models in vivo. Conclusions The alkaloid thaspine from the cortex of Croton lechleri is a dual topoisomerase inhibitor effective in cells overexpressing drug efflux transporters and induces wide-spread apoptosis in multicellular spheroids. Introduction The concept of developing target-specific drugs for treatment of cancer has not been as successful as initially envisioned [1,2]. The success rate of oncology drugs from first-in-man to registration during 1991-2000 was only around 5% for 10 major pharma companies [2]. A major causes of attrition in the clinic is lack of drug efficacy [2]. This realization has lead to a renewed interest in the use of bioassays for drug development in the field of oncology. One attractive screening endpoint is apoptosis since this form of cell death is induced by many clinically used (and effective) anticancer agents [3]. Natural products have been used as source of novel therapeutics for many years. Natural products have been selected during evolution to interact with biological targets and their high degree of chemical diversity make them attractive as lead structures for discovery of new drugs [4]. A number of plant-derived anticancer drugs have received FDA approval for marketing: taxol, vinblastine, vincristine, topotecan, irinotecan, etoposide and teniposide [5]. Antibiotics from Streptomyces species, including bleomycins, dactinomycin, mitomycin, and the anthracyclines daunomycin and doxorubicin are important anticancer agents [6]. More recently developed anticancer agents such as the Hsp90 inhibitor geldanamycin was also isolated from Streptomyces [7]. Marine organisms have also been used as source for the search of anticancer agents. 
Interesting compounds, including bryostatin (from the marine bryozan Bugula neritina), ecteinascidin (an alkaloid from the Carribian tunicate, Ecteinascidia turbinata) and dolastatin (from the sea hare), have been identified [8]. Although being the source of lead compounds for the majority of anticancer drugs approved by the Food and Drug Administration, natural products have largely been excluded from modern screening programs. We here used a high-throughput method for apoptosis detection [9] to screen a library of natural compounds using a human colon carcinoma cell line as screening target. One of the most interesting hits in this screen was thaspine, an alkaloid from the cortex of the South American tree Croton lechleri. We show that thaspine is a topoisomerase inhibitor which is active on cells overexpressing drug efflux transporters. Screening for natural products that induce apoptosis of colon carcinoma cells We used HCT116 colon carcinoma cells as target cells to screen for apoptosis-inducing agents present in NCI Natural Product Set (www.dtp.nci.nih.gov). Apoptosis was determined using a modification of the M30-ApoptosenseH method [9] which specifically measures caspase-cleaved cytokeratin 18 formed in apoptotic cells. Activity in this assay is inhibited by the pan-caspase inhibitor zVAD-fmk [9]. The M30-ApoptosenseH method is a useful screening tool since it measures the accumulation of the apoptotic product in cell cultures, leading to an integrative determination of apoptosis to the point of harvesting the cells. Using a compound concentration of 25 mM and an exposure time of 24 hours, 20 compounds were identified as inducing apoptosis above a preselected threshold value (Table 1). Molecular targets have been reported on 14 of these 20 compounds (Table 1). The alkaloid thaspine (taspine; NSC76022) was one of the remaining 6 compounds with unknown mechanism of action ( Figure 1A). Thaspine is of interest since it is an alkaloid from Dragon's blood, a latex prepared from the cortex of the tree Croton lechleri and used by tribes in the Amazonas basin for medicinal purposes. Thaspine induced strong caspase-cleavage of cytokeratin-18 in HCT116 cells at a concentration of ,10 mM (Fig. 1B). This concentration requirement is similar to that of other cancer therapeutic drugs such as cisplatin (,20 mM), doxorubicin (,3 mM) and mechlorethamine (,20 mM) for induction of caspase activity of this cell line (Fig. 1B). Thaspine was also found to induce activation of caspase-3 at 10 and 16 hours (see below). Thaspine induces apoptosis in vivo Thaspine has previously been described to have anti-tumor activity in the mouse S180 sarcoma model [10]. To examine whether in vivo anti-tumor activity is associated with induction of apoptosis, SCID mice carrying HCT116 xenografts were treated with thaspine and tumor sections were stained with an antibody to active caspase-3. Positivity was observed in tumor tissue at 48 hours after treatment with 10 mg/kg thaspine (maximally tolerated dose) ( Fig. 2A, top). We also utilized caspase-cleaved CK18 as a plasma biomarker for tumor apoptosis [11,12]. When applied to human xenografts transplanted to mice, this method allows determination of tumor apoptosis independently of host toxicity (the antibodies used in the ELISA assay are species-specific and do not detect mouse caspase-cleaved CK18 [13]). We examined two different xenograft models using this assay, the HCT116 colon carcinoma used for screening and the FaDu headneck carcinoma model. 
In order to mimic a clinical trial situation of advanced disease, tumors were allowed to grow to a size of ,400 mm 3 and then treated with a single injection of thaspine. Increases in CK18-Asp396 were observed 48 hours after injection of thaspine in both models ( Figure 2B, C). Apoptosis was paralleled by a significant, but transient, reduction of tumor size in the FaDu model ( Figure 2D). We conclude that thaspine is capable of inducing tumor apoptosis in vivo. Thaspine induces the mitochondrial apoptosis pathway Most forms of cancer therapeutics induce the mitochondrial pathway of apoptosis [3]. This pathway is associated with opening of the mitochondrial permeability transition pore [14]. We examined whether thaspine induced a decrease in HCT116 mitochondrial membrane potential (Dy M ) using the fluorescent probe tetramethyl-rhodamine ethyl ester (TMRE). Mitochondria in thaspine-treated cells underwent a shift to lower Dy M values ( Figure 3A). A hallmark of the mitochondrial apoptosis pathway is release of cytochrome c from mitochondria to the cytosol. Thaspine was found to induce a decrease in the levels of mitochondrial cytochrome c and an increase of the levels in the cytosol ( Figure 3B). The Bcl-2 family proteins Bak and Bax are key regulators of the mitochondrial apoptosis pathway [15]. During apoptosis, the conformation of these proteins is altered. Experiments using conformation-specific antibodies showed that thaspine induce conformational activation of both Bak and Bax ( Figure 3C). BH3-only proteins antagonize the pro-survival function of Bcl-2 proteins [16] or may activate pro-apoptotic Bak/Bax [17]. We used an siRNA approach to examine whether any particular BH3only proteins were required for thapsin-induced apoptosis. Transfection with 9 different siRNA pools showed that Bid and Bik siRNA significantly reduced thaspin-induced cytokeratin 18 caspase-cleavage in HCT116 cells (Fig. 3D), suggesting that these proteins are regulators of apoptosis elicited by this compound. Thaspine is a topoisomerase inhibito The molecular target of thaspine is not known. To generate hypotheses regarding the mechanism of action, we used the Connectivity Map (CMAP) [18], which is a large compendium of gene expression signatures from drug-treated cell lines. We analyzed the drug-induced gene expression changes compared to vehicle control in the breast cancer cell line MCF-7, since all of the 1,309 compounds in CMAP have been tested on this cell line. The gene expression response to thaspine was similar to that of ellipticine, a known topoisomerase II inhibitor, and similar to the response to the topoisomerase I inhibitor camptothecin ( Figure 4A). Topoisomerase inhibition was tested using in vitro enzyme assays. The results showed that thaspine inhibits both topoisomerase I and II activity at the apoptotic concentration (10 mM) ( Figure 4B, C). Furthermore, thaspine was found to have a reduced cytotoxic effect on the viability on CEM/VM-1, a cell line selected for resistance to the topoisomerase II inhibitor teniposide compared to the parental cell line CCRF-CEM (Table 2). CEM/ VM-1 harbors a mutated topoisomerase II gene which mediates a specific resistance to topoisomerase II inhibitors, but not general multidrug resistance [19,20]. CCRF-CEM are not resistant to camptothecin [21]. 
The resistance to thaspine was not as pronounced as seen for etoposide, known to be a non-intercalating topoisomerase II inhibitor, but well in line with the intercalating topoisomerase inhibitors doxorubicin and mitoxantrone. These data further suggest that thaspine is a topoisomerase inhibitor. Thaspine induced an accumulation of HCT116 cells in the S and G2/M phases of the cell cycle (Table 3). For comparison, the topoisomerase II inhibitor etoposide induced G2/M accumulation, whereas camptothecin induced some S-phase arrest (Table 3). Thaspine cytotoxicity is only weakly affected by PgP or MRP overexpression Anticancer drugs are often substrates of membrane-associated drug efflux transporters. The topoisomerase II inhibitor doxorubicin is a substrate of P-glycoprotein (Pgp, ABCB1) and etoposide is a substrate of the multidrug resistance-associated protein (MRP, ABCC1). We found that two cell lines which are strongly resistant to etoposide and/or doxorubicin due to PgP or MRP overexpressio [22] were ,2-fold resistant to thaspine compared to parental cells (Table 2). Thaspine induces apoptosis in multicellular spheroids Multicellular spheroids (MCS) are known to better mimic human solid tumor tissue than 2-D monolayer cultures. Many clinically used anticancer drugs show limited potency on MCS, a phenomenon believed to reflect their limited activity on solid tumors [23,24]. To investigate whether thaspine induces apoptosis of MCS, spheroids were formed from HCT116 and used after 5 days of incubation. At this point in time, cell proliferation in the MCS was largely confined to peripheral cell layers (Fig. 5, top left, staining for Ki67) and some spontaneous apoptosis was observed in deeper cell layers (Fig. 5, top right, staining for active caspase-3). Following drug treatment, MCS were fixed, sectioned and stained for active caspase-3. Activation of caspase-3 was observed in MCS after 10 hours of treatment with thaspine, and wide-spread activation after 16 hours of treatment (Fig. 5). Cells in the central portions of MCS did not stain positive for active caspase-3 even at the time of spheroid disintegration. To determine cell survival, spheroids were trypsinized and cells were plated at low density to determine clonogenicity. Clonogenic survival of cells from spheroids treated with 10 mM thaspine was 0.006% of cells from control spheroids. These data show that thaspine treatment was able to kill the cells in the spheroid cores, but that cell death was not by apoptosis. Cisplatin and doxorubicin did not induce widespread apoptosis in HCT116 MCS (Fig. 5). Discussion We here screened a collection of natural products for their capacity to induce apoptosis of colon carcinoma cells. Natural products are known to have a high chemical diversity [4], a necessity for drug discovery in the oncology field [25]. This approach lead to the identification of 20 agents that induced strong increases in the levels of caspase-cleaved cytokeratin 18 in colon carcinoma cells. Several of these compounds are well known to have anti-tumor activity (e.g. podophyllotoxin, daunorubicin, maytansine, rapamycin and rhizoxin). Of the remaining compounds we noted thaspine (taspine), an alkaloid present in the cortex of the South American tree Croton lechleri. Thaspine is of interest since Croton lechleri is used in traditional medicine. 
A red latex, Dragon's blood, is extracted from the tree cortex and used by tribes of the Amazonian basin for several purposes, including wound healing, as an anti-inflammatory agent, and to treat cancer [26,27]. Thaspine was previously reported to be cytotoxic [26,28,29], anti-angiogenic [30], and to have antitumor activity [28]. Consistent with these previous reports, we found that thaspine treatment induced caspase activation in tumor tissue and release of human caspase-cleaved CK18 from tumor cells into the blood of SCID mice. Our connectivity map analysis showed that thaspine induced a similar gene expression pattern as the topoisomerase inhibitors ellipticine and camptothecin. Direct measurements of enzyme activity confirmed that both topoisomerase I and II were inhibited by relevant concentrations of thaspine. Furthermore, CEM/VM-1 cells, which express a mutated form of topoisomerase II resistant to inhibitors of this enzyme [19,20], showed increased resistance to thaspine. Topoisomerases are enzymes which have important roles in DNA metabolism by adjusting the number of supercoils in the DNA molecule - a key requirement for transcription and replication.

[Table 2 note: The resistance factor was defined as the IC50 value in the resistant subline divided by that in its parental cell line. The IC50 of thaspine on the parental cell lines were between 1-3 μM. Topo II, Topoisomerase II; P-gp, P-glycoprotein (ABCB1); MRP, multidrug resistance-associated protein (ABCC1). For details on cell lines and calculations of resistance factors, see [22]. CCRF-CEM are not resistant to camptothecin [21], which was therefore not included. The experiments were repeated twice. doi:10.1371/journal.pone.0007238.t002]

Topoisomerase I is capable of introducing single strand breaks in DNA, while topoisomerase II can break both strands. A variety of clinically used anticancer drugs inhibit the action of topoisomerase I [31,32] or topoisomerase II [33]. The topoisomerase I inhibitors topotecan and irinotecan are among the most effective drugs used to treat colorectal, small cell lung and ovarian cancer [31]. Topotecan and irinotecan are chemically unstable and large efforts are being made to develop improved compounds [32]. A large number of compounds have been described to inhibit topoisomerase II, including the important clinical agents doxorubicin/adriamycin and etoposide [33]. A limited number of agents can inhibit both enzymes and may have strong antitumor activity [34]. Some agents such as intoplicine, the acridine XR5000 (DACA) bind to DNA by intercalation [35], others are physically linked inhibitors of topoisomerase I and topoisomerase II. Drug resistance is the most important cause of cancer treatment failure and represents a major challenge to the treatment and eradication of cancer. Drug resistance is known to be multifactorial. One important mechanism of resistance to clinically used DNA damaging anticancer drugs is the expression of ABC transporters such as Pgp and MRP [36]. Thaspine cytotoxicity was only marginally affected by overexpression of the P-glycoprotein (Pgp, ABCB1) or the multidrug resistance-associated protein (MRP, ABCC1). Another mechanism of resistance of solid tumors to anticancer drugs is multicellular-mediated resistance [37]. This form of resistance has been found to influence the effect of adriamycin on solid tumors, partly due to limited drug penetration into the tumor parenchyme [38,39]. Interestingly, thaspine was found to induce wide-spread apoptosis of multicellular spheroids.
This property is interesting considering that many clinically used anticancer drugs show limited potency on spheroids, possibly reflecting their limited activity on solid tumors [23,24]. The therapeutic potential of thaspine on solid tumors is therefore interesting to examine. Such studies require optimization of drug formulation (thaspine is hydrophobic; XlogP = 2.8) and evaluation of how thaspine should be combined with other drugs. Only a fraction of the diversity of the biosphere has been tested for biological activity. The approach to screen chemically diverse drug libraries for apoptosis-inducing compounds and to use the Connectivity Map resource [18] for unveiling the mechanisms of action of the compounds identified is reasonably straight-forward. We believe that this approach may be effective in anticancer drug discovery.

Compounds
The NCI Natural Product Set was obtained from the Developmental Therapeutics Program of the US National Cancer Institute (http://www.dtp.nci.nih.gov).

Cell culture and screening
HCT116 colon carcinoma cells (from the American Type Culture Collection, ATCC) were maintained in McCoy's 5A modified medium/10% fetal calf serum at 37°C in 5% CO2. Multicellular spheroids were formed from HCT116 cells as described [40]. For description of cell lines MCF-7, CCRF-CEM, CEM/VM-1, RPMI 8226, NCI-H69, H69AR see [22]. Cells were seeded in 96 well plates at 10,000 cells per well. Compounds were added to a concentration of 25 μM in a final concentration of 1% DMSO; control wells received 1% DMSO only. After 24 hours of drug treatment, NP40 was added to the culture medium to 0.1% to extract caspase-cleaved CK18 from cells and to include material released to the medium from dead cells. Caspase-cleaved cytokeratin-18 (CK18-Asp396) was determined using 25 μl medium/extract using the M30 CytoDeath ELISA assay (a variant of the M30-Apoptosense ELISA [9] developed for in vitro use (Peviva AB, Bromma, Sweden)). Signals were factorised to percent of a staurosporine reference (quadruple wells used on each plate at 1 μM) and compounds that induced a signal >60% of staurosporine were selected for study. Resistance mediated by efflux proteins and mutated topoisomerase II was evaluated using the fluorometric microculture cytotoxicity assay (FMCA) [41]. The resistance factor was defined as the IC50 value in the resistant subline divided by that in its parental cell line [42].

siRNA transfection
HCT116 cells were transfected in 96 well plates with siRNAs to BH-3 only proteins using HiPerfect transfection reagents as recommended by the manufacturer (all reagents from QIAGEN, Hilden, Germany). Pools of 3 different siRNAs were used (final concentration 25 nM) for triplicate wells for each siRNA pool. After 54 hours of incubation, thaspine or solvent was added to the wells of the 96 well plates and incubation was continued for 18 hours. CK18-Asp396 was then determined using the M30 CytoDeath ELISA as described above.

[Figure 5 caption: Thaspine induces wide-spread activation of caspase-3 in spheroids. HCT116 spheroids with homogeneous diameters were formed using the hanging drop technique as described [40]. Five days after formation spheroids were compact and contained proliferating cells only in the surface layers (Ki67 staining, top left). Spheroids were treated with drugs for the times indicated, fixed, sectioned and stained for active caspase-3. Thaspine was used at 20 μM, doxorubicin at 20 μM, and cisplatin at 40 μM. doi:10.1371/journal.pone.0007238.g005]
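To make the screening and resistance calculations above concrete, here is a small sketch of the two arithmetic steps: expressing each compound's ELISA signal as a percentage of the plate's staurosporine reference and applying the >60% hit threshold, and computing a resistance factor as the IC50 ratio between a resistant subline and its parental line. All numeric values are placeholders, not assay data.

```python
# Sketch of the hit-selection and resistance-factor calculations described above.
# Signal values and IC50s are placeholders; the real assay uses plate-wise
# staurosporine reference wells.
def percent_of_reference(signal, reference_signals):
    ref = sum(reference_signals) / len(reference_signals)   # mean of quadruple wells
    return 100.0 * signal / ref

staurosporine_wells = [1850, 1910, 1780, 1830]
compound_signals = {"NSC76022": 1450, "NSC00001": 320}      # hypothetical signals
hits = {name: s for name, s in compound_signals.items()
        if percent_of_reference(s, staurosporine_wells) > 60.0}
print("hits:", list(hits))

def resistance_factor(ic50_resistant, ic50_parental):
    # RF = IC50 in the resistant subline / IC50 in its parental line
    return ic50_resistant / ic50_parental

print("RF:", resistance_factor(ic50_resistant=6.0, ic50_parental=2.0))
```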
Immunological assays
Conformational changes resulting in exposure of inaccessible N-terminal epitopes of Bak and Bax were determined using flow cytometry [43]. Conformation-specific antibody to Bak was obtained from Oncogene Research Products (AM03, clone TC100) and to Bax from BD Biosciences PharMingen (clone 6A7; San Diego, CA). The increases in accessibility of the epitopes was monitored using the FL1 channel of a FACSCalibur flow cytometer (BD, Franklin Lakes, NJ) as previously described [44]. Tumor or spheroid sections were deparaffinized with xylene, rehydrated and microwaved and then incubated over-night with the primary antibodies diluted in 1% (wt/vol) bovine serum albumin and visualized by standard avidin-biotin-peroxidase complex technique (Vector Laboratories, Burlingame, CA, USA). Counterstaining was performed with Mayer's haematoxylin. Antibody against active caspase-3 was from Pharmingen (used 1:50) and antibody to Ki67 (MIB-1) was from Immunotech SA, Marseille, France (used at 1:150).

Mitochondrial Membrane Permeability Transition
Loss of mitochondrial membrane potential was monitored using tetramethyl-rhodamine ethyl ester (TMRE) from Molecular Probes (Eugene, OR). Fluorescence was measured by flow cytometry.

Assay of cytochrome c release
HCT116 cells were treated with 10 μM thaspine and harvested at 7 or 16 hours. After washing with PBS, cells were resuspended in 500 μL (10 mM NaCl, 1.5 mM CaCl2, 10 mM Tris, pH 7.5) and incubated for 20 min on an ice bath. After passing 10 times through a 40 gauge needle, 400 μL mitochondrial buffer (210 mM mannitol, 70 mM sucrose, 20 mM Hepes-KOH, pH 7.5, and 1 mM EDTA containing 0.45% bovine serum albumin (BSA)) was added and nuclei were pelleted by centrifugation at 1000 × g for 10 min at +4°C. The supernatant was transferred to a new tube and centrifuged for 30 min at maximal speed in an Eppendorf centrifuge at +4°C. Supernatant and pellet fractions were analysed by Western blotting (cytochrome c antibody ab33484 from Abcam used at 1:1000).

Connectivity Map
The Connectivity Map (CMAP) (www.broad.mit.edu/cmap) build 02 contains genome-wide expression data for 1300 compounds (6100 instances, including replicates, different doses and cell lines). We followed the original protocol using MCF-7 breast cancer cells as described by Lamb et al [18]. Briefly, cells were seeded in a 6-well plate at a density of 0.4×10^6 cells per well. Cells were left to attach for 24 h, followed by exposure to thaspine at a final concentration of 10 μM, or to vehicle control (DMSO). After 6 h treatment, the cells were washed with PBS and total RNA was prepared using RNeasy miniprep kit (Qiagen, Chatsworth, CA). Starting from two micrograms of total RNA, gene expression analysis was performed using Genome U133 Plus 2.0 Arrays according to the GeneChip Expression Analysis Technical Manual (Rev. 5, Affymetrix Inc., Santa Clara, CA). Raw data was normalized with MAS5 (Affymetrix) and gene expression ratios for drug treated vs. vehicle control cells were calculated to generate lists of regulated genes. Filter criteria were present call for all genes in the treated cell line and an expression cut-off of at least 100 arbitrary expression units. Only probes present on HG U133A were used, for CMAP compatibility. The 40 most up and down regulated genes (i.e. probes) were uploaded and compared to the 6100 instances in the CMAP database, to retrieve a ranked compound list.
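The query-signature construction described above (present-call and expression filters, treated/control ratios, and selection of the 40 most up- and down-regulated probes for the CMAP comparison) can be outlined as follows. The table, column names, and values are hypothetical; the actual analysis used MAS5-normalized Affymetrix data and the CMAP web interface.

```python
# Sketch of building a CMAP query signature as described above: filter probes,
# compute treated/control ratios, and keep the 40 most up- and down-regulated.
# The input table and its column names are hypothetical.
import pandas as pd

expr = pd.DataFrame({            # MAS5-normalized values per probe (placeholders)
    "probe": ["p1", "p2", "p3", "p4"],
    "treated": [850.0, 90.0, 400.0, 120.0],
    "control": [120.0, 300.0, 410.0, 115.0],
    "present_call": [True, True, True, False],   # present call in treated cells
    "on_hg_u133a": [True, True, False, True],    # restrict to HG U133A probes
})

filtered = expr[expr.present_call & expr.on_hg_u133a & (expr.treated >= 100)]
filtered = filtered.assign(ratio=filtered.treated / filtered.control)
up = filtered.nlargest(40, "ratio")["probe"].tolist()
down = filtered.nsmallest(40, "ratio")["probe"].tolist()
print("up:", up, "down:", down)
```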
Raw and normalized expression data have been deposited at Gene Expression Omnibus (http://www.ncbi.nlm. nih.gov/geo/) with accession number GSE13124. Topoisomerase enzyme assays Tests of topoisomerase inhibition were performed using kits from TopoGen Inc. (Port Orange, FLA) according to the instructions of the manufacturer. All incubations were performed in the presence of solvent (DMSO) for 30 minutes at 37uC followed by agarose gel electrophoresis in the absence of ethidium bromide. Gels were stained with ethidium bromide, washed and photographed. Treatment of mouse xenografts and determination of caspase-cleaved CK18 in mouse plasma HCT116 or FaDu tumors were grown as subcutaneous xenografts in SCID mice (4 mice in each group; each injected with 10 6 cells). When tumors had grown to a size of approximately 400 mm 3 (somewhat larger than the generally used size; a larger tumor size leads to better sensitivity using the biomarker assay), mice were injected with thaspine subcutaneously and tumor size measured daily. The dose used (10 mg/kg) was tolerated by the mice (toxicity was observed at 20 mg/kg). FaDu cells express high levels of CK18 which is released to the extracellular compartment and into the blood from dying cells. Mice were sacrificed 48 hours after injection of thaspine and EDTA-plasma was collected. Caspase-cleaved CK18 (CK18-Asp396) was measured in 12.5 mL of plasma using the M30-ApoptosenseH assay (Peviva AB, Bromma). Each sample was mixed with 0.4 mL of heterophilic blocking reagent (HBR-Plus purified, part#3KC579; Scantibodies laboratory Inc, Santee CA, USA). Animal experiments were conducted in full accordance with Swedish governmental statutory regulations on animal welfare under permission from ethical committees (permit N295/06 Stockholms norra djurförsöksetiska nä mnd).
2017-07-25T15:07:03.985Z
2009-10-02T00:00:00.000
{ "year": 2009, "sha1": "9608c1a95ce1e85667b874372f2813632139136f", "oa_license": "CCBY", "oa_url": "https://journals.plos.org/plosone/article/file?id=10.1371/journal.pone.0007238&type=printable", "oa_status": "GOLD", "pdf_src": "PubMedCentral", "pdf_hash": "9608c1a95ce1e85667b874372f2813632139136f", "s2fieldsofstudy": [ "Biology", "Medicine" ], "extfieldsofstudy": [ "Biology", "Medicine" ] }
109585372
pes2o/s2orc
v3-fos-license
Comparative analysis of muscle activation patterns between skiing on slopes and on training devices The increasing popularity of ski as winter sport and the difficulties related to its accessibility and environmental conditions led to the development of specific training devices for practise, both for improving technique and for training after injuries. In particular the aim of this work was to study the efficacy of two training devices: Skimagic ® and Skier’s Edge ® , comparing their functionality to ski on natural snow. The efficacy of training devices was investigated after comparing the EMG activation patterns of snow skiing with the other two training conditions; good correlation of activation patterns should correspond to a better simulation of the skiing movement. Introduction Skiing is a very popular sport discipline involving a full body motion in an open environment. The muscular recruitment patterns at the lower limbs are related to the ski discipline, the skill levels and the snow properties: in most cases anti-gravitational muscles are reported to work in eccentric mode [1]. Data acquisition of EMG signals is quite complex in such environment [2] and requires an approach integrated with kinematic and kinetic data to give a real insight of the skiing technique. Despite its popularity, very few skiing training devices are available to athletes and beginners for training away from the slopes or rehabilitating after injuries [3,4,5]. The aim of the present work was to make a comparison of muscle activation patterns during skiing between three different skiing conditions. A slalom skiing on natural snow in a medium steep slope was compared with two training devices: a full scale treadmill (Skimagic®) and a portable home training device (Skier's Edge®). Methods Two expert male skiers (age: 24.9 ± 1.3, weight: 72.5 ± 3.5, height: 170.2 ± 0.3 ) free from recent injury or pain were involved in the study. Subjects are ski instructors since 5 years and well experienced with Skimagic®. The study was composed of three sessions (Fig. 1). • In the first session, after some free runs on ski as a warm-up, subjects were asked to perform a slalom skiing on natural snow on a moderate slope. Each run was composed of 12 turns and was repeated by subjects three times. Short slalom poles (span 1.5 m, pace 8m) were placed on the snow to increase the repeatability between runs. During the data collection weather conditions were good; temperature was very cold and the snow was hard. • The second session took part on Skimagic®. Skimagic® is a full scale (6x6 m) treadmill that runs upward powered by a power supply of 25 kWh, during its operating it needs to be wet to better mimic friction conditions of its surface in comparison to snow. During the session it run at a speed of 22 km/h and its slope was set at 25° to recreate the same conditions of the first session on the snow. After some minutes of free runs as a warm-up, subjects were asked to ski on the device performing a slalom of medium width, to mimic turns around slalom poles. Each run was composed of at least 12 turns. • The third session involved the use of Skier's Edge®. Skier's Edge® is a portable home device that permits the movement of the user along coronal and sagittal planes. Subject is standing on two footboards which can rotate along their longitudinal and medial axes while rubber straps are used to set the intensity of the strength performed by the skier. 
After some minutes of exercise on the device, with the double aim of acting as a warm-up and getting the subjects familiar with the device, subjects were asked to perform three runs of at least 12 turns for each side. In order to analyze muscle activation patterns, EMG signals and kinematics data at the knee were collected during each session. A PDA-PocketEMG® (145x95x20 mm, 0.3Kg) with 16 channels, placed at the chest of the skier, was used to collect muscle data on both legs. Five muscles were analyzed for each lower limb, the muscles are: Rectus Femoris, Vastus Medialis, Biceps Femoris, Tibialis Anterior and Gastrocnemius Lateralis (they will be named RF, VAM, BFCL, TA and GAL in the following section). A Biometrics® biplanar electrogoniometer was used to collect synchronously the flexionextension and ab-adduction angles at the right knee. All the data were collected at a frequency of 1 KHz. At the end of each session, Maximum Voluntary Contractions (MVC) were performed on all referred muscles with a standardized protocol in use in our Department. EMG raw signals from the 10 muscles were rectified, integrated with a mobile window of 150 ms, filtered with a 5 Hz low pass Butterworth filter and normalized with respect to the MVC of the proper session of data collection. Values of flexion-extension angle of the knee were used to define cycles of skiing: each cycle was defined as a sequence of internal-external-internal turns. In particular, the cycle for the 5 muscles of the right lower limb was defined by two sequential maximum values of the knee flexion angle. Similarly, two sequential minimum values of right flexion angle define the cycle of left limb. Each EMG signal was then averaged across at least 3 cycles of the same run and also across different runs, for each different session. Results The results of the analysis described in previous session for subject 1 are presented as an example in Figures 2-4. In every single picture the three conditions of this study are compared: control condition (skiing on natural snow) is presented as black thick, skiing on Skimagic is in black thin and training on Skier's Edge is in grey thick lines. Figure 2.b presents also the standard deviation to provide an example of its range for the EMG collected. The values collected by the electrogoniometer at right knee during a mean cycle are shown in Figure 2.a (in degrees). All the values are positive because they represent a flexion starting from a neutral position (standing) at 0°. All the other figures represent the mean curves of EMG activation for the analyzed muscles during one cycle with values expressed as % of MVC. For each muscle on both lower limbs two quantities were evaluated to compare the EMG activation patterns among the three conditions under study: these were the Peak activation value (Figure 3.b) and the Mean activation value within a cycle, defined as the average of the activation during the cycle. In particular, values of EMG activity collected on snow session were expressed in % of MVC and considered as the control values; values collected in the two other conditions were expressed in terms of relative differences (in %) with respect to the control condition (skiing on snow). In addition, the differences in timing of the peaks were expressed as Phase Shifts relative to the snow skiing control session. The values indicate the percent of cycle where are the peaks of maximum activation, for the other two conditions the values are the difference (in % of cycle) from this one. 
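The EMG analysis chain described above (rectification, integration with a 150 ms moving window, 5 Hz low-pass Butterworth filtering, normalization to the session's MVC, and the per-cycle peak, mean, and phase-shift comparisons against the snow control) can be sketched with standard NumPy/SciPy calls. The filter order and the synthetic inputs below are assumptions made for illustration; they are not specified in the text.

```python
# Sketch of the EMG conditioning and per-cycle metrics described above.
import numpy as np
from scipy.signal import butter, filtfilt

FS = 1000  # Hz, acquisition frequency reported above

def emg_envelope(raw, mvc_value, window_ms=150, cutoff_hz=5.0, order=4):
    rectified = np.abs(raw - raw.mean())                    # remove offset, rectify
    win = int(window_ms / 1000 * FS)
    integrated = np.convolve(rectified, np.ones(win) / win, mode="same")
    b, a = butter(order, cutoff_hz / (FS / 2), btype="low")
    return 100.0 * filtfilt(b, a, integrated) / mvc_value   # % of MVC

def cycle_metrics(envelope):
    peak_idx = int(np.argmax(envelope))
    return {"peak": float(envelope[peak_idx]),
            "mean": float(envelope.mean()),
            "peak_phase_pct": 100.0 * peak_idx / len(envelope)}

def relative_to_control(device, control):
    return {"peak_diff_pct": 100.0 * (device["peak"] - control["peak"]) / control["peak"],
            "mean_diff_pct": 100.0 * (device["mean"] - control["mean"]) / control["mean"],
            "phase_shift_pct": device["peak_phase_pct"] - control["peak_phase_pct"]}

# demo with synthetic signals standing in for one averaged cycle per condition
rng = np.random.default_rng(0)
snow = emg_envelope(rng.normal(0, 0.3, 2000) * np.sin(np.linspace(0, np.pi, 2000)), mvc_value=1.0)
skimagic = emg_envelope(rng.normal(0, 0.2, 2000) * np.sin(np.linspace(0, np.pi, 2000)), mvc_value=1.0)
print(relative_to_control(cycle_metrics(skimagic), cycle_metrics(snow)))
```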
In Table 1 and Table 2, results from data analysis related to subject 1 and 2 are presented: values corresponding to the control condition (skiing on snow) are in bold. Table 3 summarizes results of the first two tables: the results of each muscle are expressed as grand mean, averaged across both lower limbs and across subjects. Again, control condition values are expressed in percent of MVC and in bold, while values from the other conditions are expressed as the difference (in %) of activation from control condition. Discussion First of all, it is relevant to notice that functional values of EMG activity collected during skiing can reach higher values than those collected during the MVC, as shown in literature [6]. This happened also in the present study, on some muscles of subject 1, as it can be seen from Table 1 and could have affected the large variability of some results. Data collected from subject 2, in fact, are more consistent between both limbs and between conditions too. In most muscles the peaks of activation appear close to the middle of the cycle: it means that muscles of the leg of interest are doing the greatest effort while the limb is performing an external turn, according with previous studies [2]. This is always true for both subjects for muscles VAM and RF. Despite the last statement it has to be said that subject 2 showed also peaks of activation during internal turns, in some cases higher than those during external turns: this is probably related to his personal skiing technique. Going further in the analysis of peak EMG values, it is possible to say that in both training devices the muscle patterns of activation have similar behaviors, especially for VAM and RF: this can be confirmed looking at the phase shift values. Peaks of activation, indeed, for both the training devices occur very close (in %) to the peaks collected in skiing on snow, and this can be taken as a positive evaluation of such devices. At the same time it is difficult to find some common trends on the other muscles because of a greater variability; a higher mean activation for the control condition seems the general trend for such muscles. A possible interpretation of such plots can be found in the fact that the slalom tests and the corresponding simulation were demanding a mild effort level to the subjects: more clear trends would have been collected on racing slalom courses. EMG signals recorded during control condition present high values of the mean activation during the cycle, as mentioned before. Muscles with a mean activation value closer to the control condition are again VAM and RF for both conditions, whereas the largest differences can be found in the activation of GAL and BFCL. All muscles are less activated during the exercise on Skier's Edge® than on Skimagic® and the difference is quite big if compared to control condition. This could be due to different factors: a lower effort than the one performed during skiing because of the set-up and the lower vibration levels introduced from external environment (snow and treadmill respectively) that can affect muscles. In fact the EMG activity produced by flexors in Skier's Edge® is very low: this is because the plyometric return after the first phase of pushing is mainly passive and performed by the elastic straps of the device. For all these reasons the difference with the control condition (skiing on snow) expressed in terms of total muscle activity resulted to be -39.6% on the Skimagic® treadmill and -56.7% on the Skier's Edge® . 
These data can provide information basically about muscles training. Further investigation will be focused on collecting kinematics and kinetics data to estimate values of forces and torques at knee joints during skiing on these simulators. These additional analyses will help in better characterizing the devices, looking at their efficacy not only in term of patterns of muscle activation but also in term of articular loads. This could be of interest in the choice of suitable rehabilitative devices for athletes who sustained knee injuries.
2019-04-12T13:56:55.026Z
2010-06-01T00:00:00.000
{ "year": 2010, "sha1": "a4a3a25ba62e0cd32126efc7c5ea33f904677182", "oa_license": null, "oa_url": "https://doi.org/10.1016/j.proeng.2010.04.028", "oa_status": "GOLD", "pdf_src": "Adhoc", "pdf_hash": "80621c3731817df56217552d8e493a14da14bfef", "s2fieldsofstudy": [ "Engineering", "Environmental Science", "Medicine" ], "extfieldsofstudy": [ "Computer Science" ] }
263977433
pes2o/s2orc
v3-fos-license
Transcriptome responses to heat and cold stress in prepupae of Trichogramma chilonis Abstract Trichogramma is a useful species that is widely applied in biocontrol. Temperature profoundly affects the commercial application of T. chilonis. Different developmental transcriptomes of prepupae and pupae of T. chilonis under 10, 25, and 40°C were obtained from our previous study. In this study, transcriptomic analysis was further conducted to gain a clear understanding of the molecular changes in the prepupae of T. chilonis under different thermal conditions. A total of 37,295 unigenes were identified from 3 libraries of prepupae of T. chilonis, 17,293 of which were annotated. Differential expression analysis showed that 408 and 108 differentially expressed genes (DEGs) were identified after heat and cold treatment, respectively. Under heat stress, the pathway of protein processing in endoplasmic reticulum was found to be active. Most of the genes involved in this pathway were annotated as lethal (2) essential for life [l(2)efl] and heat shock protein genes (hsps), which were both highly upregulated. Nevertheless, most of the genes involved in another significantly enriched pathway of starch and sucrose metabolism were downregulated, including 1 alpha‐glucosidase gene and 2 beta‐glucuronidase genes. Under cold stress, no significantly enriched pathway was found, and the significantly enriched GO terms were related to the interaction with host and immune defenses. Together, these results provide us with a comprehensive view of the molecular mechanisms of T. chilonis in response to temperature stresses and will provide new insight into the mass rearing and utilization of T. chilonis. | INTRODUC TI ON Temperature is among the most vital abiotic factors that affect the spatial distribution and population abundance of animals (Damos & Savopoulou-Soultani, 2012;Hoffmann et al., 2003;Wang, Fang, et al., 2014). As ectotherms, insects are exposed to and challenged by temperature stress (Ju et al., 2013;Paaijmans et al., 2013;Yee et al., 2017). Thus, the ability to cope with temperature stress is crucial for the survival of insects (Bartlett et al., 2020;Srithiphaphirom et al., 2019). Insect species have developed various adaptation mechanisms to overcome stressful temperatures during their evolution, which enables species richness and diversification around the world (Dennis et al., 2015;González Tokman et al., 2020;Storey & Storey, 2012). To date, adaptive responses to temperature stress have been reported, including behavioral, morphological, physiological, and molecular changes in insects (Hoffmann et al., 2003;Musolin & Saulich, 2012;Sejerkilde et al., 2003). Among them, the gene profile changes quickly and plays a versatile role in responses to temperature stress, contributing to physiological resilience (Buckley et al., 2006;Gleason & Burton, 2015). Genes such as the heat shock protein gene (hsp) (Ritossa, 1962;Zhao & Jones, 2012) and antifreeze protein (atf) (Duman, 1977;Wen et al., 2016) have already been proven to respond to temperature stress, which could maintain the structure and physiological function of insect cells during stress. In recent years, high-throughput sequencing has been widely used to identify genes and to perform expression profiling (Chen et al., 2019;Wang, He, et al., 2014;Wu et al., 2015). It provides large-scale genetic information and could broaden our understanding of the underlying mechanisms in insects (Hegde et al., 2003). 
Transcriptomic responses to temperature stress have been characterized in certain groups of insects, such as Helicoverpa assulta (Cha & Lee, 2016), Nilaparvata lugens , and Drosophila melanogaster (MacMillan et al., 2016). The adaption mechanism turns out to be a complex progress that involving various molecular changes. Trichogramma chilonis (Hymenoptera: Trichogrammatidae), a tiny egg parasitoid wasp, is widely used in the biological control of numerous lepidopterous pests, including Chilo spp., H. armigera, and Pectinophora gassypiella (Ballal & Singh, 2003;Zhang et al., 2014). It is usually mass-reared within the temperature range from 25°C to 30°C (Dadmal et al., 2010;Hussain et al., 2013). To meet the great demand for pest control, T. chilonis individuals at the prepupal stage were favorable for storage and transport under low temperatures (Yuan et al., 2013). Nevertheless, temperatures beyond the temperature threshold lead to suppression of parasitization, adult emergence, and fertility rates (Haile et al., 2002;Nadeem & Hamed, 2008;Yuan et al., 2012). The ambient temperature could be lower than 0°C in winter and higher than 40°C in summer (Chen et al., 2015;Xiao et al., 2016). Such temperatures could impose various constraints on each process of commercial application of this parasitoid wasp, including mass rearing, storage, and release. Many researchers have noticed and well explored the adverse effects on Trichogramma species under high and cold temperatures (Harrison et al., 1985;Nadeem et al., 2009;Schöller & Hassan, 2001). Although our previous studies have confirmed that hsps are induced by heat stress, the transcriptomic responses to low-temperature and high-temperature stress are still not fully understood (Yi et al., 2018). Recently, we obtained the transcriptomes of prepupae and pupae of T. chilonis exposed to 10, 25, and 40°C for 4 hr and subsequently explored the molecular changes between prepupae and pupae of T. chilonis at 25°C (Liu, Yi, et al., 2020). In the present study, we conducted transcriptome profiling to characterize the transcriptomic response to heat and cold stress in the prepupae of T. chilonis. Based on the obtained transcriptome data, comparison analysis was further performed to identify differentially expressed genes (DEGs). Quantitative real-time PCR (qRT-PCR) was used to examine the thermally responsive DEGs. This study aimed to obtain a comprehensive understanding of the adaptive mechanism of thermal tolerance in T. chilonis. | Insects and temperature exposure The colonies of T. chilonis were obtained from the Plant Protection Research Institute, Guangdong Academy of Agricultural Sciences, People's Republic of China and were reared at 25 ± 1°C, 75 ± 5% relative humidity, and a 14 L:10 D photoperiod (Figure 1). The prepupae of T. chilonis were confirmed based on our previous studies (Liu, Yi, et al., 2020;Yi et al., 2018), and the corresponding parasitized eggs were exposed to three temperatures: 10°C (T1, cold), 25°C (T2, control), and 40°C (T3, heat) for 4 hr. Then, the parasitized eggs were dissected to collect T. chilonis individuals. Individuals were F I G U R E 1 Morphological characteristics of the adult female (a) and male (b) of Trichogramma chilonis collected when the pulm spots appeared on the body. For qRT-PCR analysis, each treatment was repeated three times. Each specimen, containing 50 individuals, was immediately frozen in liquid nitrogen and stored at −80°C. 
| Transcriptome data We previously obtained the transcriptomes of prepupae and pupae of T. chilonis exposed to 10, 25, and 40°C for 4 hr (Liu, Yi, et al., 2020). Based on the annotated results of 6 transcriptomes, the unigene annotation information from 3 transcriptomes of prepupae was selected and analyzed. | Differential gene expression analysis and functional annotation All clean reads from 3 transcriptomes of prepupae were aligned to the unigene library. The results were used to calculate the expression level through RSEM software (http://deweylab.biostat.wisc.edu/RSEM). The relative measure of transcript abundance was fragments per kilobase of transcript per million mapped reads (FPKM). The differential expression analysis of unigenes was conducted between the control (25°C) and the treatments (10°C or 40°C). A false discovery rate (FDR) < 0.01 and |log2 fold change (FC)| ≥ 1 were set as the thresholds to screen out the DEGs. GO and KEGG enrichment analyses were applied to determine the significantly enriched GO terms and KEGG terms of DEGs. | Quantitative real-time PCR analysis To confirm the results of differential expression analysis, a total of 11 enriched DEGs were selected for qRT-PCR analysis. Glyceraldehyde-3-phosphate dehydrogenase (gapdh) was used as the control. Primers were designed with Primer Premier 5.0 and are displayed in Table S1. The total RNA of each group was extracted using TRIzol reagent (Invitrogen) according to the manufacturer's protocol. The PrimeScript RT reagent Kit (TaKaRa) was used to synthesize cDNA. The qRT-PCR was carried out in a LightCycler® 480 Real-time PCR system (Roche Diagnostics Ltd) using SYBR Green I Master (Roche Diagnostics Ltd). The results were used to calculate the relative expression levels of chosen genes through the 2^(−ΔΔCt) method. | Overview of the transcriptome in prepupae of T. chilonis A total of 9.04 Gb of bases were obtained from 3 transcriptomes of T. chilonis prepupae. The clean reads, Q30, and GC content of each library were over 14,721,157, 92.24%, and 45.95%, respectively, as presented in our previous study (Liu, Yi, et al., 2020). Finally, 37,295 unigenes were identified from 3 libraries (Figure S1). Only 9 DEGs were upregulated in both T2 versus T1 and T2 versus T3, of which 4 DEGs were annotated as lethal (2) essential for life [l(2)efl]. | Functional analysis of DEGs in heat-stressed T. chilonis Gene Ontology (GO) enrichment analysis for the DEGs was performed in prepupae under heat conditions. Of 408 DEGs, 52 upregulated and 65 downregulated DEGs were annotated in the GO database (Table S2). These DEGs were assigned to 39 GO terms. Only 3 DEGs in T2 versus T1 could be assigned to 6 pathways (Figure 3b). Two DEGs, namely CL5074Contig1 and CL1586Contig1, were downregulated and involved in metabolic pathways. CL11101Contig1 was the particularly upregulated DEG and participated in "protein processing in endoplasmic reticulum." This DEG was annotated as l(2)efl, which was assigned to the GO term of response to stress by GO enrichment analysis. | Validation of DEG data by qRT-PCR The qRT-PCR results of 11 DEGs are presented in Figure 4. Among these genes, CL11101Contig1 was upregulated after heat and cold stress. During heat stress, the qRT-PCR results were consistent with the DEG data. During cold stress, although the qRT-PCR results represented lower expression changes, the changing trend was similar to that of the DEG data. The DEGs obtained from T2 versus T1 and T2 versus T3 are diverse (Figure S1). 
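To illustrate the screening and validation calculations described above, the following minimal Python sketch applies the stated DEG thresholds (FDR < 0.01 and |log2FC| ≥ 1) and the 2^(−ΔΔCt) formula. The table columns, file name, and Ct values are hypothetical placeholders, not the actual outputs of the RSEM/qRT-PCR pipeline used in the study.

```python
import pandas as pd

# Hypothetical differential-expression table (one row per unigene) with
# columns: unigene, log2FC (treatment vs. 25°C control), FDR.
de_table = pd.read_csv("T2_vs_T3_diffexp.csv")

FDR_CUTOFF = 0.01    # false discovery rate threshold stated in the methods
LOG2FC_CUTOFF = 1.0  # |log2 fold change| threshold stated in the methods

# Screen out differentially expressed genes (DEGs) and split by direction
# for downstream GO/KEGG enrichment.
degs = de_table[(de_table["FDR"] < FDR_CUTOFF) &
                (de_table["log2FC"].abs() >= LOG2FC_CUTOFF)]
upregulated = degs[degs["log2FC"] > 0]
downregulated = degs[degs["log2FC"] < 0]
print(f"{len(degs)} DEGs ({len(upregulated)} up, {len(downregulated)} down)")

def relative_expression(ct_target_trt, ct_gapdh_trt, ct_target_ctl, ct_gapdh_ctl):
    """Relative expression by the 2^(-ΔΔCt) method with gapdh as the reference."""
    ddct = (ct_target_trt - ct_gapdh_trt) - (ct_target_ctl - ct_gapdh_ctl)
    return 2 ** (-ddct)

# Example call with invented Ct values, for illustration only.
print(relative_expression(22.1, 18.0, 24.6, 18.1))
```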
Only 84 DEGs were annotated during cold exposure, whereas 307 annotated DEGs were obtained during heat exposure (Table 2). In addition, the upregulated DEGs were predominant in T2 versus T3, which is the opposite of the results from T2 versus T1. Different transcriptional profiles were also observed in three rice planthopper species under low and high temperatures, in which heat treatment induced more DEGs than cold treatment in Laodelphax striatellus. After heat and cold treatment, the "protein processing in endoplasmic reticulum" pathway was found in T2 versus T1 and was significantly enriched in T2 versus T3 (Figure 3). It was also found in Monochamus alternatus (Li et al., 2019), Cryptolaemus montrouzieri, and Cnaphalocrocis medinalis (Quan et al., 2020) under heat conditions. This pathway favors the correct folding of proteins or degradation of misfolded proteins (Chu et al., 2020; Huang et al., 2018). Here, we found 12 DEGs involved in the "protein processing in endoplasmic reticulum" pathway after heat treatment, most of which were upregulated (Table S2). Among these, 5 genes of l(2)efl (CL11101Contig1, CL13371Contig1, CL2202Contig1, CL5154Contig1, CL5675Contig1) and 4 hsps (CL10548Contig1, CL8830Contig1, Group1_Unigene_BMK.11524, Group1_Unigene_BMK.8427) were identified and were also enriched in the GO term "response to stress," suggesting that they may play an important role in the response to heat stress. In many insects, hsps are believed to respond to temperature stress (Cheng et al., 2016; Huang et al., 2009; Zhao & Jones, 2012). Although different hsps may be induced in different insects or under different treatments, they usually act as molecular chaperones to protect proteins from aggregation and misfolding (Jiang et al., 2012; King & MacRae, 2015; Shi et al., 2013). [Figure 3: KEGG enrichment analysis of differentially expressed genes (DEGs) in T2 vs. T3 (a) and T2 vs. T1 (b); T1, low-temperature group; T2, control group; T3, high-temperature group.] Another kind of DEG, the l(2)efls, are small heat shock-related genes that respond to stress. These genes are considered homologous to small hsps in D. melanogaster (Chang & Geib, 2018; Kyriakis et al., 1994). They were also found in Aedes aegypti (Runtuwene et al., 2020) and A. mellifera (Zaluski et al., 2020). In this study, we found 8 different variants in T2 versus T3 and T2 versus T1 (Table 4). They were upregulated not only in T2 versus T3 but also in T2 versus T1, implying that l(2)efls are important for thermal adaptation in prepupae of T. chilonis. [Table 4: Differentially expressed HSP family genes and l(2)efls under heat and cold stress; "↑" indicates that the unigene is upregulated, "-" indicates that it is not differentially expressed.] [Figure 4: Comparison of fold change in gene expression from DEG and qRT-PCR results.] These results indicate that genes involved in "protein processing in endoplasmic reticulum" may contribute to the repair or removal of proteins damaged by temperature stresses, which is crucial for thermal tolerance in T. chilonis. The pathway "starch and sucrose metabolism" was another significantly enriched term after heat treatment. Similar to Glyphodes pyloalis (Liu et al., 2017), the involved DEGs were partially downregulated. 
Among these DEGs, 1 alpha-glucosidase gene (CL3277Contig1) and 2 beta-glucuronidase genes (Group1_Unigene_BMK.17521 and Group2_Unigene_BMK.18083) were downregulated, indicating repression of carbohydrate metabolism, which may contribute to heat tolerance (Table S2) (Belhadj Slimen et al., 2016; Liu et al., 2017). Interestingly, a trehalase isoform 1 gene (CL1268Contig1) was upregulated after heat treatment; this gene encodes the enzyme that hydrolyzes trehalose (Shukla et al., 2016). In Galeruca daurica, this kind of gene was upregulated in summer diapause (Chen et al., 2018). In addition, this gene was also found to play an important role in insect development in Yu's study, in which inhibition of trehalase affected the trehalose and chitin metabolism pathways in Diaphorina citri (Hemiptera: Psyllidae) (Yu et al., 2020). Nevertheless, its physiological role in the thermal tolerance of T. chilonis still needs to be further characterized. After cold treatment, 6 pathways were identified in T2 versus T1, but only with a corrected p-value > .05 (Figure 3). However, we found 6 significantly enriched GO terms in T2 versus T1, including "host cell part," "interaction with host," "cell-cell adhesion," "defense response to fungus," "defense response to bacterium," and "positive regulation of DNA binding" (Table 3). Most of the involved DEGs were downregulated, suggesting that cold stress may alter the fitness and immune defenses of this parasitoid wasp (Iltis et al., 2020). Further analysis revealed that 2 DEGs were hypothetical proteins (Group1_Unigene_BMK.9173, CL7903Contig1) (Table S2). Interestingly, 1 downregulated DEG was annotated as a heat shock factor protein (Group1_Unigene_BMK.9733), and no hsp was found in T2 versus T1, which was inconsistent with other reports (Kashash et al., 2019; Liu, Han, et al., 2020; Wang et al., 2017). Our previous study revealed that exposure to 10°C could not induce the expression of hsps in T. chilonis (Yi et al., 2018). This may be additional molecular evidence that the prepupae of T. chilonis mounted only a low-intensity response to 4 hr of cold exposure at 10°C. Temperature stresses also induced a large variety of unigenes that had no clear functional classification. In our study, after cold or heat treatment, cuticular protein LCP family member precursors were positively expressed (Table S3). This has been confirmed in stick insects and D. melanogaster (Dennis et al., 2015; MacMillan et al., 2016). During cold, the alteration of cuticles may contribute to avoiding inoculative freezing (Dennis et al., 2015). Two BAG domain-containing protein Samui-like genes were also upregulated. They act as heat shock protein cochaperones, regulating the activity and effectiveness of HSPs (Lancaster et al., 2016). Thus, increasing the expression levels of these unigenes may also be beneficial for avoiding damage caused by thermal stresses in the prepupae of T. chilonis, as has been reported in Bombyx mori (King & MacRae, 2015). | CONCLUSION In summary, this study represents the first report on the transcriptomic response to thermal stresses in prepupal T. chilonis. Transcriptional changes were different under heat and cold stress, with 408 and 108 DEGs in the two treatments, respectively. After heat treatment, a large number of DEGs were significantly enriched in the pathways of "protein processing in endoplasmic reticulum" and "starch and sucrose metabolism," such as hsps, l(2)efls, beta-glucuronidase genes, and 1 alpha-glucosidase gene. 
They may play an important role in the response to heat stress. However, after cold treatment, no significantly enriched pathway was observed. The GO enrichment analysis showed that a few DEGs were enriched in terms related to interactions with the host and immune defenses. These results suggest that cold exposure to 10°C for 4 hr may alter the fitness and immune defenses of T. chilonis but is not devastating to the prepupae of this parasitoid wasp. Overall, this work provides valuable information for a comprehensive view of the molecular mechanisms of T. chilonis in response to temperature stresses. CONFLICT OF INTEREST The authors have declared that they have no competing interests.
2021-05-05T00:09:40.459Z
2021-03-11T00:00:00.000
{ "year": 2021, "sha1": "ff5fa1c9450e63a2f209f2325182d4172f4872ac", "oa_license": "CCBY", "oa_url": "https://www.ncbi.nlm.nih.gov/pmc/articles/PMC8093697", "oa_status": "GREEN", "pdf_src": "PubMedCentral", "pdf_hash": "0356dff6c1257e2022aa3edc0fd0a8bc7b2bb3f3", "s2fieldsofstudy": [ "Biology" ], "extfieldsofstudy": [ "Medicine", "Biology" ] }
252600286
pes2o/s2orc
v3-fos-license
Optical emissivity dataset of multi-material heterogeneous designs generated with automated figure extraction Optical device design is typically an iterative optimization process based on a good initial guess from prior reports. Optical properties databases are useful in this process but difficult to compile because their parsing requires finding relevant papers and manually converting graphical emissivity curves to data tables. Here, we present two contributions: one is a dataset of thermal emissivity records with design-related parameters, and the other is a software tool for automated colored curve data extraction from scientific plots. We manually collected 64 papers with 176 figures reporting thermal emissivity and automatically retrieved 153 colored curve data records. The automated figure analysis software pipeline uses Faster R-CNN for axes and legend object detection, EasyOCR for axes numbering recognition, and k-means clustering for colored curve retrieval. Additionally, we manually extracted geometry, materials, and method information from the text to add necessary metadata to each emissivity curve. Finally, we analyzed the dataset to determine the dominant classes of emissivity curves and determine the underlying design parameters leading to a type of emissivity profile. 1 Automated retrieval of design-related parameters from text Automatic tools were insufficient to obtain the desired design-related attributes from the text. Figure captions and in-text figure-referring descriptions were easy to locate but lacked much data. To examine how much information we could automatically retrieve from them, we processed the collected captions and descriptions with the Lawrence Berkeley National Lab Natural Language Processing (LBNLP) package [1]. LBNLP includes standard text mining tools for materials science and chemistry using the pre-trained inorganic materials model [2] for Named Entity Recognition [3] via Long Short Term Memory neural network [4]. The algorithm identifies materials, properties, applications, phases, structure descriptors, synthesis, and characterization methods assigning corresponding tags to the words. However, LBNLP is not specific to the optics domain. Applied to our data, LBNLP highlighted descriptors quite broadly, complicating future studies. Materials and geometries found by LBNLP significantly differed from the manual check for each record. Automatic extraction mistakenly defined a silicon carbide film as the most used design. However, this approach managed to highlight tungsten and tantalum slabs with a 2D array of cylindrical cavities on the surface. Algorithmic approach to axes regions identification We used computer vision algorithms implemented in the OpenCV [5] package to localize axes lines on scientific plots. We tested two approaches. First was the Canny edge detection [6] combined with polygon approximation [5]. It detected axes box, drawing the rectangular on top of the four axes lines (all of which had to be present on the plot for this approach to work). It located the axes box on 63% of figures, and all of them were correct (sometimes slightly displaced). The second approach used Canny edge detection combined with the Probabilistic Hough line transform [7]. It detected each axes line separately, aiming to locate x-axis and y-axis lines. This approach found lines on 92% of figures from our set, but it made mistakes such as confusing grid lines with axes lines. 
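As a rough illustration of the second approach (Canny edge detection followed by the probabilistic Hough line transform), the sketch below locates candidate x- and y-axis lines with OpenCV. The thresholds and the "longest nearly horizontal/vertical segment" heuristic are assumptions made for this example rather than the exact parameters of the pipeline, and grid lines can still be mistaken for axes, as noted above.

```python
import cv2
import numpy as np

def find_axis_lines(image_path, canny_low=50, canny_high=150):
    """Roughly locate x- and y-axis lines on a plot image (Canny + HoughLinesP)."""
    gray = cv2.imread(image_path, cv2.IMREAD_GRAYSCALE)
    edges = cv2.Canny(gray, canny_low, canny_high)

    # The probabilistic Hough transform returns line segments as (x1, y1, x2, y2).
    segments = cv2.HoughLinesP(edges, rho=1, theta=np.pi / 180, threshold=80,
                               minLineLength=gray.shape[1] // 3, maxLineGap=5)
    if segments is None:
        return None, None

    horizontal, vertical = [], []
    for x1, y1, x2, y2 in segments[:, 0]:
        if abs(y2 - y1) <= 2:    # nearly horizontal segment
            horizontal.append((x1, y1, x2, y2))
        elif abs(x2 - x1) <= 2:  # nearly vertical segment
            vertical.append((x1, y1, x2, y2))

    # Heuristic: take the longest horizontal segment as the x axis and the longest
    # vertical segment as the y axis (grid lines can still fool this step).
    x_axis = max(horizontal, key=lambda s: abs(s[2] - s[0]), default=None)
    y_axis = max(vertical, key=lambda s: abs(s[3] - s[1]), default=None)
    return x_axis, y_axis
```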
Then, we applied both approaches sequentially and correctly found axes lines for 95% of figures in the dataset. The localization of axis lines required a large amount of custom code, leaving out the numbering and ticks detection. Also, this approach is unreliable as different images will likely result in new issues to handle. All in all, the traditional methods performed in a manner that suggested they would not be robust to future changes. Ticks location during the automated axes scale parsing During the development of the automated axis scale pipeline, we used the EasyOCR [8] package for axes numbering detection and recognition and assumed that ticks were located at the center of the detected number box. This assumption is valid if two conditions are satisfied: (i) EasyOCR must correctly localize the number region with a tight box; (ii) ticks must be centered to numbers. A manual check proved that in most cases, both conditions were true. Figure 1 shows some examples of axes scale parsing with the original axes region and the result of automated axes detection next to each other. Assumed (green) ticks closely match with the original ticks. Image color decomposition: other methods and our approach We noticed that most of the existing solutions, such as Color Thief [9] tool, Scikit-learn [10] k-means package, Dominant Color Detection [11], missed the colors. Figure 2 shows the incomplete palettes produced by each of the listed methods. Also, the palette changed every time we applied the mentioned method. We assumed that the issue was caused by the random initialization of the color centers. We have mostly white images, and it is statistically difficult to get complete diversity of colors in one random set. We have adjusted the k-means algorithm initialization, forcing it to start with the eight color cluster centers representing a combination of RGB and CMYK modes: white, red, green, blue, cyan, magenta, yellow, and black. Then, we iteratively updated the palette, checking the distance between every pixel and color cluster centers (L2 norm in RGB space). Also, we allowed dropping the empty clusters. This modification resulted in a correct steady set of color centers for each image. Figure 2 shows the palette obtained with our algorithm. 5 Search for the best parameters for unsupervised clustering of curve profiles with DBSCAN DBSCAN method [10] has several parameters to adjust. First, eps -the maximum distance between two objects for being considered as neighbors. Second, min samples -the neighborhood's minimum number of members (or total weight). The third is the metric for distance matrix calculation. The metric did not significantly influence our result, so we set it to be Euclidian and focused on searching for the best values of eps and min samples. We analyzed how the number of clusters and the number of unclustered curves (noise) depend on these parameters. When eps was less than one, only a small portion of samples was clustered, and the noise cluster was large. An increase in eps increased the number of clusters, reducing the noise. However, a further increase in eps reduced the number of clusters as the extracted groups started to concatenate. There was no noise when eps was equal to five, and all entries were put in a single cluster. All in all, we determined that the best values for parameters were eps = 2.6 and min samples = 5, which produced 7 clusters leaving half of the curves as noise. 
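A minimal sketch of this parameter search with scikit-learn is shown below. It assumes each emissivity curve has already been resampled onto a common grid so that it can be treated as a fixed-length feature vector; the file name and sweep ranges are placeholders chosen for illustration.

```python
import numpy as np
from sklearn.cluster import DBSCAN

# Hypothetical input: (n_curves, n_points) array of emissivity curves
# resampled onto a common wavelength grid.
curves = np.load("resampled_curves.npy")

results = []
for eps in np.arange(0.5, 5.1, 0.1):
    for min_samples in range(3, 11):
        labels = DBSCAN(eps=eps, min_samples=min_samples,
                        metric="euclidean").fit_predict(curves)
        n_clusters = len(set(labels)) - (1 if -1 in labels else 0)  # -1 marks noise
        n_noise = int(np.sum(labels == -1))
        results.append((round(float(eps), 1), min_samples, n_clusters, n_noise))

# Final choice reported in the text: eps = 2.6 and min_samples = 5,
# giving 7 clusters and leaving about half of the curves as noise.
final_labels = DBSCAN(eps=2.6, min_samples=5, metric="euclidean").fit_predict(curves)
```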
Notably, min samples influenced the number of clusters much stronger than the noise volume. Supposedly, a change in cluster size did not involve new samples to be clustered, simply refining the existing clustering. We map the clusters with the package UMAP [10] in Figure 3. One dot corresponds to a curve; colors correspond to the cluster labels. Although the axes units are noninterpretable, all clusters are well defined in this mapping. Curves left out as noise by DBSCAN Unsupervised clustering with the DBSCAN algorithm found groups of similar behavior among half of the records, labeling the other half as noise. We put the noise curves into a single class. Figure 4 demonstrates that the noise class curves have a variety of profiles. Possible values for various keys in data records The dataset of thermal emissivity records with metadata is represented as JSON files with various keys. Table 1 provides possible values for the keys in data records. The content is not limited for some keys, and the value can list any number of descriptors. Other keys can have only one value out of a fixed set. Regarding the unlimited values, key "geometry" stores all keywords used in the source paper for the characterization of the geometry, which we considered to be descriptive. Also, under key "materials", we listed all materials used in the device. In contrast, we put a single value from the chosen set for "composition key" and "geometry key". Key "data type" can have one of two values, but the key "tool" lists all mentioned methods. Key "comment" is for any important notes; key "info on image" stores information from a figure given as a text comment or in an inset. Key "color" provides a HEX color code of the cluster center found with the automated curve data extraction algorithm. Key "score" contains value of a quality score estimated during technical validation with values from 0 to 1. 8 Rinsing text off with OCR algorithms Some figures have the text comment of the same color as the curve, and in these cases, one color channel contains both curve and comment information. The comments are of very different content: sometimes it is a word from the standard English language, sometimes it is a sequence of Greek letters or an equation with numbers and mathematical operators. In an attempt to remove the text, we used EasyOCR [8] package allowing any English and Latin letter as well as numbers and symbols. EasyOCR detected text comments but often returned meaningless messages due to the high diversity of allowed symbols. Also, for the curves of complicated behavior, EasyOCR mistakenly detected portions of the curve as text. Figure 5 shows an example of detection with incorrectly recognized text comments and letters "I" and "M" assigned to oscillating parts of curve. Decision tree for metadata analysis To better understand the role of metadata for the curve classes, we trained the decision tree with Scikitlearn library [10]. For the geometry, we used a numerical encoding: 0 -film, 1 -1D grating, 2 -bull's eye, 3 -2D grating. The other geometry types were present only in the noise class. We applied a binary encoding for the composition: 0 -single material, 1 -sandwich. For the material list, we used one-hot encoding with the number 1 if the material was present in the structure and 0 if it was absent. After training, the decision tree had an accuracy of 0.86 on five-fold cross-validation. 
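The training step just described can be sketched as follows with scikit-learn. Only the encodings (geometry as 0-3, composition as 0/1, one-hot materials), the entropy splitting criterion, and the five-fold evaluation follow the text; the randomly generated metadata rows and cluster labels are stand-ins for the real records.

```python
import numpy as np
from sklearn.tree import DecisionTreeClassifier
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)

# Hypothetical encoded metadata, one row per emissivity record:
#   geometry: 0 = film, 1 = 1D grating, 2 = bull's eye, 3 = 2D grating
#   composition: 0 = single material, 1 = sandwich
#   remaining columns: one-hot flags for materials (e.g. W, Ta, Au)
n_records = 80
geometry = rng.integers(0, 4, size=n_records)
composition = rng.integers(0, 2, size=n_records)
materials = rng.integers(0, 2, size=(n_records, 3))
X = np.column_stack([geometry, composition, materials])

# Curve-class labels from the DBSCAN clustering step (randomly assigned here).
y = rng.integers(1, 8, size=n_records)

tree = DecisionTreeClassifier(criterion="entropy", random_state=0)
scores = cross_val_score(tree, X, y, cv=5)  # five-fold cross-validation
print("mean CV accuracy:", scores.mean())   # the study reports 0.86 on the real data
tree.fit(X, y)                              # final tree, e.g. for plotting or export
```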
We chose entropy as a splitting criterion because the goal was to determine what parameter had the primary influence on the splitting and led to more information gain. Figure 6 shows the obtained decision tree. Geometry and choice of metals have a major impact on an emissivity curve profile. The decision tree algorithm firstly branches on the geometry, checking if the design has 2D periodicity on the surface. This split contains a significant information gain towards clusterization as the entropy value is decreased by one-third. With 2D periodicity (following the right arrow), the presence of tantalum or tungsten leads to the split into classes 4, 5, and 7. Class 6 is present all over the tree, so we neglect it from the consideration here. Following the left arrow, the tree goes deeper into geometry details, checking if the surface is flat or has grating. Class 1 mainly has "bull's eye" surface grating, while classes 2 and 3 are films with flat surfaces. This analysis corroborates the idea that geometry plays first. Nine electronic paper scrapers To check the presence of the keywords in figure captions, we implemented nine electronic paper scrapers for nine publishers. We were working with HTML versions of full-text papers by means of regular expressions. First, we manually determined unique HTML sequences used by each publisher to encode figure object and figure caption, we list them in Table 2. We chose HTML sequences in the way that the part of electronic paper from the start-sequence to the end-sequence contained only the caption sentence and other HTML encoding tags. Next, in every HTML paper, we located all start and end sequences with regular expressions and matched them in pairs combining start-sequence with the first following end-sequence. Then, we extracted paper parts for every start-end pair. As none of the HTML encoding tags contains "emissivity", "emitter" or "emission", we used regular expressions to check these paper parts for the presence of the keywords without any further processing. The algorithm took 2 days to run on the database of 4.9 million papers obtained through special publisher agreement.
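The caption-matching step can be sketched roughly as below. The start/end HTML sequences shown are placeholders for one hypothetical publisher; the actual per-publisher sequences are those listed in Table 2 and are not reproduced here.

```python
import re

# Placeholder start/end sequences for one hypothetical publisher; the real
# per-publisher sequences are listed in Table 2.
START_SEQ = re.escape('<figcaption class="caption">')
END_SEQ = re.escape('</figcaption>')

KEYWORDS = re.compile(r"\b(emissivity|emitter|emission)\b", re.IGNORECASE)

def captions_with_keywords(html_text):
    """Return caption snippets (start-to-end sequence pairs) mentioning the keywords."""
    hits = []
    # Pair each start sequence with the first following end sequence (non-greedy match).
    for match in re.finditer(START_SEQ + r"(.*?)" + END_SEQ, html_text, re.DOTALL):
        caption = match.group(1)
        if KEYWORDS.search(caption):
            hits.append(caption)
    return hits

# Usage sketch over the full-text HTML corpus:
# for path in html_paths:
#     with open(path, encoding="utf-8") as f:
#         relevant = captions_with_keywords(f.read())
```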
2022-09-30T13:30:22.324Z
2022-09-29T00:00:00.000
{ "year": 2022, "sha1": "7ba3cc37573a98fad1628a36159e24642ddd19e0", "oa_license": "CCBY", "oa_url": null, "oa_status": null, "pdf_src": "PubMedCentral", "pdf_hash": "1cf6f8ef4125740418296d7572ca85f3472f5a22", "s2fieldsofstudy": [ "Materials Science", "Engineering", "Physics", "Computer Science" ], "extfieldsofstudy": [ "Medicine" ] }
14751603
pes2o/s2orc
v3-fos-license
Cost of intervention delivery in a lifestyle weight loss trial in type 2 diabetes: results from the Look AHEAD clinical trial Summary Objective The Action for Health in Diabetes (Look AHEAD) trial was a randomized controlled clinical trial to compare the effects of 10 years of intensive lifestyle intervention (ILI) with a control condition of diabetes support and education (DSE) on health outcomes in over 5,000 participants with type 2 diabetes. The ILI had significantly greater weight losses than DSE throughout the trial. The goal of this analysis is to describe the cost of delivering the intervention. Methods The ILI was designed to promote weight loss and increase physical activity. It involved a combination of group plus individual intervention sessions, with decreasing frequency of contact over the 10 years. The intervention incorporated a variety of strategies, including meal replacement products, to improve weight loss outcomes. The costs of intervention delivery were derived from staff surveys of effort and from records of intervention materials from the 16 US academic clinical trial sites. Costs were calculated from the payer perspective and presented in 2012 dollars. Results During the first year, when intervention delivery was most intensive, the annual cost of intervention delivery, averaged (standard deviation) across clinical sites, was $2,864.6 ($513.3) per ILI participant compared with $202.4 ($76.6) per DSE participant. As intervention intensity declined, costs decreased, such that from years 5 to 9 of the trial, the annual cost of intervention was $1,119.8 ($227.7) per ILI participant and $102.9 ($33.0) per DSE participant. Staffing accounted for the majority of costs throughout the trial, with meal replacements and materials to promote adherence accounting for smaller shares. Conclusions The sustained weight losses produced by the Look AHEAD intervention were supported by intervention costs that were within the range of other weight loss programmes. Future work will include an evaluation of the cost‐effectiveness of the ILI and will contain additional follow‐up data. Objective The Action for Health in Diabetes (Look AHEAD) trial was a randomized controlled clinical trial to compare the effects of 10 years of intensive lifestyle intervention (ILI) with a control condition of diabetes support and education (DSE) on health outcomes in over 5,000 participants with type 2 diabetes. The ILI had significantly greater weight losses than DSE throughout the trial. The goal of this analysis is to describe the cost of delivering the intervention. Methods The ILI was designed to promote weight loss and increase physical activity. It involved a combination of group plus individual intervention sessions, with decreasing frequency of contact over the 10 years. The intervention incorporated a variety of strategies, including meal replacement products, to improve weight loss outcomes. The costs of intervention delivery were derived from staff surveys of effort and from records of intervention materials from the 16 US academic clinical trial sites. Costs were calculated from the payer perspective and presented in 2012 dollars. Introduction Recent clinical trials have shown that lifestyle interventions using behavioural counseling to induce weight loss and increase physical activity can have important health benefits (1,2). The Diabetes Prevention Program (DPP), e.g. lifestyle intervention, reduced the risk of developing diabetes by 58% compared with a placebo intervention (3). 
In Look AHEAD, intensive lifestyle intervention (ILI) did not reduce cardiovascular morbidity and mortality in persons who were overweight or obese with type 2 diabetes (4), but it produced improvements in sleep apnea, incontinence and erectile dysfunction, reduced the incidence of high-risk kidney disease and resulted in better long-term diabetes control and some remission of diabetes, as well as fewer hospitalizations, compared with the control group (5)(6)(7)(8)(9)(10). An important concern often raised (11) is that these interventions are costly to provide: data are needed on the costs involved in offering different types of weight loss programmes. Such information will serve as the basis for cost-effectiveness analyses that will inform decisions about the types of lifestyle interventions that should be offered in clinical and public health approaches to obesity. The cost of offering weight loss programmes is highly variable. A recent review of commercial programmes provided information about monthly costs of programmes for participants (12). Whereas participation in self-directed programmes may be free or cost less than $20 per month, weight loss programmes that include counseling, meal replacement products and medical monitoring cost $400-$700 per month. Similarly, although the lifestyle intervention provided in the DPP study was estimated (in year 2000 dollars) to cost $1,399 in year 1 ($162 per month) (13), the DPP has been translated into community-based programmes (e.g. YMCA) that typically last 1 year and cost $400 to $600 per participant (or $33-$60 per month) (14). Clearly, the variability in these costs relates to the intensity of the programme (i.e. the number of sessions and whether they are offered in groups or individually), the type of staff used to deliver the programme (e.g. peers, nutritionists and physicians) and whether food products are provided. Costs also vary depending on what is included in the cost calculations. Some estimates focus on costs assessed from the perspective of the payer (which include labor costs and may or may not also include costs for renting space and for intervention materials). Other estimates target societal costs, which include both costs from the perspective of the payer and costs from the participant perspective (such as costs for the time spent in intervention sessions, time spent exercising and/or travel time). Costs of commercial programmes to the participant also include a profit margin to sustain the business model. This paper presents an analysis of the costs involved in delivering the lifestyle intervention in Look AHEAD (15), a large multicentre clinical trial with an intensive weight loss and exercise intervention. The costs of delivering the ILI and its control conditions from the payer perspective are described. Costs associated with staffing group and individual sessions, providing intervention materials and supplying meal replacement products are differentiated. Methodological issues involved in estimating direct costs, especially costs related to space for conducting these interventions, are also discussed. Research design and methods Look AHEAD was a multicentre, randomized controlled trial in individuals who were overweight or obese with type 2 diabetes that evaluated the effect of an ILI focused on weight loss and physical activity relative to a control condition. The primary outcome was incidence of major cardiovascular events (4). Secondary outcomes included many other markers of health (15). 
To be eligible for enrollment, participants were aged 45-76 years, with a body mass index of at least 25 kg m À2 (27 kg m À2 if using insulin), HbA 1c < 11%, systolic blood pressure < 160 mmHg, diastolic blood pressure < 100 mmHg and triglycerides-600 mg dL À1 (15). They underwent a maximal graded exercise test (to ensure that exercise could be safely prescribed) and completed 2 weeks of monitoring food intake and physical activity. They were then randomly assigned, with equal probability, to either the ILI or the control condition, referred to as diabetes support and education (DSE). Participants were enrolled between 2001 and 2004. All informed consent procedures were approved by local Institutional Review Boards, and the consent forms were signed by the participants. The trial was registered at ClinicalTrials.gov Identifier: NCT00017953. The interventions continued through September 2012. Detailed descriptions of both the ILI and DSE interventions have been published elsewhere (16,17). A brief synopsis follows. Intensive lifestyle intervention The ILI was designed to achieve and sustain an average loss of 7% or more of initial weight, primarily through an intensive regimen including diet modification and increased physical activity (16). During the first 6 months of ILI, participants attended three group meetings and one individual session per month. For the remainder of the first year, participants were provided two groups and one individual meeting per month. In months 13-48, participants attended monthly individual meetings that were followed approximately 14 days later with phone calls or e-mails from interventionists. Optional monthly group meetings were also offered during these latter years. The intervention sessions were typically led by registered dietitians (RDs) or exercise specialists. Individual sessions were planned to last about 30 min and usually were provided by a single interventionist. Group classes were often conducted by two or more staff members who might include a lifestyle interventionist and a research assistant or a combination of an RD and an exercise specialist. These sessions were scheduled for 60-90 min and were offered at several different times (day and evening sessions) to accommodate participants' schedules. Depending on the site, group classes varied from fewer than 10 participants to up to 20 members. The ILI participants were asked to keep daily records of their food intake and physical activity. They were also instructed to attempt to reach behaviour and activity goals (described next) and turn in their diary records to the intervention staff at scheduled meetings. Intervention staff provided individualized feedback. The dietary goal of ILI participants included a calorie and fat gram prescription and use of meal replacement products to help participants adhere to their calorie goals. Goals were individualized on the basis of body weight. During the first 4 months, participants were provided servings of liquid meal replacement (e.g. Slim Fast, HMR, OPTIFAST and Ensure) to replace two meals and one snack per day. The physical activity component of the ILI consisted mostly of home-based exercises with a goal of 175 min of moderate-intensity physical activity per week. Lifestyle strategies were provided to facilitate adherence to diet and activity goals. 
Beginning in month 7, a 'toolbox' algorithm was implemented: participants who had not lost 5% of initial weight in the first 6 months were offered more advanced behavioural strategies or the optional use of a weight loss medication (orlistat) (16). Diabetes support and education The DSE intervention was designed to retain participants in the trial and consisted of educational sessions focused on diet, physical activity and social support (17). Four meetings were offered in year 1, three per year in years 2-4 and one meeting per year thereafter. Attendance at these meetings was optional. Each meeting lasted 90-120 min and was typically taught by a team that might include an RD, an exercise specialist, and a nurse educator or behaviour therapist. Different team members attended different sessions, on the basis of the topics covered. Each session was offered at several different times (daytime and evenings) to accommodate participants' schedules. Assessment of costs involved in intensive lifestyle intervention and diabetes support and education delivery The data used to estimate personnel costs came from two sources: (i) salary data from sites (year 2007) and (ii) periodic surveys of sites (see Appendix 1 for further details on timing of surveys and Appendix 2 for common staffing types). The year 2007 was used as a benchmark because it was the midpoint of intervention delivery from 2001 to 2012. The survey queried the number of specific types of staff members at the centre (e.g. the number of RDs), the percent of individual or group lifestyle sessions that were conducted by this type of staff member (by an RD) and the time spent in delivery. Although extensive training was done to ensure proper completion of these surveys, a review of the data suggested that sites interpreted the questions in different ways. For example, among sites that had three RDs and where individual sessions were always conducted by an RD, sites differed in reporting that each RD conducted 33% of the individual sessions or 100% of the individual visits. The latter led to the unlikely conclusion that three RDs were present at each individual session. To clarify the staffing patterns that were used, a biostatistician at the Staffing models and time allocations were used to project personnel costs of delivering ILI and DSE. Researchrelated staffing costs and facility (building/rent) costs were excluded, but both time spent preparing and delivering sessions were considered. Actual salaries and fringe benefits for each type of staff member (e.g. RD, exercise specialist and nurse) were obtained from each site and were used to calculate personnel costs. From a separate database, we also determined the number and type of visits attended by each participant and for those in ILI, their use of meal replacement products and orlistat. Although the study received meal replacement products and orlistat free of charge (from institutional donors), the costs that would have been associated with these products in typical programmes were used. Differences in the costs (in 2012 dollars) of delivery between the ILI and DSE interventions over 9 years of follow-up were examined. This cost estimate will be useful later in reporting the cost-effectiveness of the Look AHEAD intervention. Statistical methods Staffing effort was collected beginning in 2001. Salary data (including fringe costs) for these efforts were obtained in 2007 dollars (to adopt a common benchmark within the time span of the intervention). 
For consistency and comparison to a previous Look AHEAD publication (7), salaries were inflation-adjusted by 3% per year to be in 2012 dollars. Staff salaries associated with participant visits that occurred in 2001/2002 used this 2012 inflation-adjusted salary, whereas those associated with participant visits in 2003 and beyond not only received the inflation adjustment to 2012 but also received a 3% salary increase for each year beyond 2002. Annual staffing costs per participant in Table 4 were obtained by first summing costs within individual and year. Next, these sums were averaged within clinic, and finally, an across-clinic average (and standard error) was obtained. Meal replacement cost data were obtained at the clinic level by year. Costs were distributed equally among all ILI participants at the clinic during that year. Use of orlistat was collected on a per-participant basis. The cost of orlistat was summed across all participants within site and year and was then distributed equally among all participants. Toolbox funds were dollars made available to clinics to assist participants struggling with weight loss (ILI) and were used to provide DSE participants with educational and other materials to aid in diabetes management. Results Participants' baseline characteristics and subsequent weight loss Table 1 describes the demographic characteristics of participants. Nearly 60% of participants were women, and approximately one-third were from racial and ethnic minorities. As reported previously, ILI and DSE participants lost an average of 8.6% and 0.7% of initial weight, respectively, at year 1. Mean losses from baseline were 4.7% and 1.1%, respectively, at year 4 and 6.0% and 3.5%, respectively, at an average of 9.6 years of followup, when the intervention was terminated. Table 3 displays the annual costs of such items as meal replacements, orlistat, donations and toolbox items by treatment group. For DSE participants, non-staffing costs were limited to donations and toolbox items alone (i.e. diabetes-related educational materials) and averaged $86 per participant in year 1 and $45 per participant in subsequent years. Non-staffing costs For ILI participants, non-staffing costs included meal replacements, orlistat, donations and toolbox items (including small exercise and diet equipment) (Appendix 3). Of these, meal replacements were the major contributor to cost. Non-staffing costs were approximately $1,322 in year 1, declining to $529 in later years. Figure 1 shows the average per participant cost of the ILI and the DSE programmes during each year of the intervention. Costs were higher for ILI participants, especially during year 1. In year 1, the cost per participant per kg lost was $329.30. Conclusions This paper provides detailed information about the cost of offering the ILI in Look AHEAD and important data about the treatment components that contributed to this cost. The cost of offering ILI (including staffing costs for individual session and group sessions and costs for meal replacement products and intervention supplies) was $2,865 per participant in year 1, $1,944 in year 2 and decreased gradually to $1,120 in years 5-9. In contrast, the DSE programme cost (including staffing and retention items) was only $202 per participant in year 1, $136 in year 2 and decreased to $103 in years 5 to 9. 
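As a purely illustrative sketch of the salary-adjustment and averaging rules described under Statistical methods (3% per year inflation to 2012 dollars, a further 3% per year for visits after 2002, then sums within participant-year averaged within and across clinics), the following uses hypothetical visit records; the column names, staff hours, and dollar figures are invented for the example.

```python
import pandas as pd

INFLATION = 1.03  # 3% per year

def staff_cost_2012(salary_2007_per_hour, hours, visit_year):
    """Staff cost of one visit in 2012 dollars, per the adjustment rules above."""
    cost = salary_2007_per_hour * hours * INFLATION ** (2012 - 2007)
    if visit_year > 2002:
        cost *= INFLATION ** (visit_year - 2002)  # extra 3% per year beyond 2002
    return cost

# Hypothetical visit-level records (clinic, participant, visit year, staff hours,
# 2007 hourly salary including fringe); values are invented for illustration.
visits = pd.DataFrame({
    "clinic": ["A", "A", "B", "B"],
    "participant": [1, 1, 2, 3],
    "year": [2002, 2005, 2002, 2005],
    "hours": [0.5, 1.5, 0.5, 1.5],
    "salary_2007": [35.0, 35.0, 40.0, 40.0],
})
visits["cost"] = [staff_cost_2012(s, h, y)
                  for s, h, y in zip(visits["salary_2007"], visits["hours"], visits["year"])]

# Sum within participant and year, average within clinic, then average across clinics.
per_participant_year = visits.groupby(["clinic", "participant", "year"])["cost"].sum()
per_clinic_year = per_participant_year.groupby(["clinic", "year"]).mean()
across_clinics = per_clinic_year.groupby("year").agg(["mean", "sem"])
print(across_clinics)
```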
The major contributor to this difference was the intensity of the treatment contact: ILI participants were offered 12 individual and 30 group sessions in year 1, as compared with only four group sessions for the DSE group. The primary component of the cost of the ILI was the staffing, which accounted for 60-70% of the per capita annual cost across years. The variability between clinics in the total annual cost of conducting the interventions likely reflects differences in the type and number of staff members that were used to conduct the sessions and differences in attendance at these sessions. The Look AHEAD lifestyle intervention included both group and individual sessions in an effort to capitalize on the strengths of these two different approaches. Group treatment programmes have produced better weight losses than individual programme (18), and thus, our programme relied primarily on group sessions, especially in year 1 of the programme when contacts were most frequent. Although group sessions were longer in duration and required more staff members than individual sessions, the fact that the costs are spread across many different individuals significantly reduces the per person cost. During year 1 of Look AHEAD, the per participant cost of an individual session was more than twice as much as a group class; however, by year 4, the study-wide costs of group and individual sessions were similar because of the lower attendance at group sessions in later years. On the basis of the findings from the literature, use of meal replacement products was included in our lifestyle intervention to promote greater weight loss and maintenance (19). Although greater use of meal replacement products was associated with better outcomes (20), it is likely that those who use the meal replacement products were also adhering to other aspects of the intervention. Although meal replacement products were donated to the trial, if purchased, they would have cost $798 and $650, respectively, per participant in years 1 and 2, $513 in year 3, $366 in year 4 and $207 per participant in later years, because of the decreasing use of these items over time. Costs of lifestyle intervention supplies, including items such as food scales and measuring cups, and exercise tools such as pedometers, fit stability balls and walking tapes, were estimated. These supplies cost approximately $300 per participant per year. It is unclear if these items were related to weight loss outcomes, but they are believed to contribute to retaining interest in the trial and motivating participants to continue to change eating and exercise behaviours. Although cost of purchasing or renting space is often included in the direct cost calculations for a programme, this was not included in our calculation because of the difficulty estimating this parameter, both for group and individual sessions. In many programmes, space is rented for each intervention session, and a rental cost (e.g. $50 per room per hour) may be used to estimate this parameter. For example, Jakicic et al. conducted a group-based lifestyle programme, with 42 group sessions plus individual make-up meetings over 18 months. Their total per capita cost of space was estimated at $122 (21). In other studies, a general estimate of overhead costs is provided by using a percentage of the personnel costs; using this approach, DPP estimated that the per capita cost of the lifestyle intervention was 69% of the cost of personnel or $519 during year 1 (13). 
Whereas this latter calculation appears to include costs of the office space used by staff members, the approach taken by Jakicic et al. does not. Whether or not to include such space may depend on factors such as whether the staff member is a full-time employee or comes to the site only to offer the treatment contacts. Other factors, such as where the space is located (within a hospital, in a commercial property or in space provided by a community partner; within or outside the city proper), will also influence the costs of space. As previously reported (4), the ILI in Look AHEAD produced a mean weight loss of 8.6% at year 1 and 6.0% at the end of the intervention (median of 9.6 years). This weight loss was significantly greater than the 0.7% and 3.5% weight loss in DSE at year 1 and the end of intervention, respectively. As noted earlier, the lifestyle intervention had a large number of both short-term and longterm health benefits relative to DSE (4)(5)(6)(7)(8)(9)(10). In addition, the lifestyle intervention was associated with a reduction in medical care costs, with a 10% savings related to hospitalizations and a 7% savings in medication costs. Over 10 years, the ILI led to a mean relative per person cost saving of $5,280 (7). Future work, which will incorporate additional follow-up, will evaluate the overall costeffectiveness of the intervention. This evaluation will weigh the contributions of three components: healthcare cost savings, the costs involved in offering the lifestyle intervention and the health benefits that are associated with the intervention. Our study has some limitations. Estimates of personnel time were based on cross-sectional surveys. The rate that intervention tools (e.g. orlistat) were used by participants may have been increased because they were provided free of charge. Attendance rates in this clinical trial also may have been higher than those typically seen in clinical practice settings. Finally, we have used actual labor costs instead of Bureau of Labor Statistics national wage rates, which may limit generalization. Recently, there have been a number of efforts to reduce the cost of lifestyle interventions, as provided in the DPP, by offering programmes at community centres, using lay community intervention staff (22), or by using digital media to deliver programmes (23). A meta-analysis of this literature showed that there were no differences in weight loss achieved by trained professionals vs. lay educators (24). Similarly, these authors noted that there were no data on the cost-effectiveness of 8, 12 or 16 treatment sessions, but that the more sessions participants attended, the greater their weight loss. Finally, the effectiveness, as well as the cost-effectiveness, of lifestyle interventions used in combination with contemporary technology (e.g. activity monitors) remains unclear (25). In conclusion, the cost of the ILI used in Look AHEAD was $2,864.60 per participant in the first year (or $329 per kg weight loss) but decreased over the subsequent 8 years of the intervention. These costs were driven primarily by personnel costs. Look AHEAD investigators currently are examining whether the costs of ILI in overweight and obese participants with type 2 diabetes are justified by a reduction in a variety of health problems and the associated medical care costs.
2018-04-03T02:15:37.613Z
2017-02-24T00:00:00.000
{ "year": 2017, "sha1": "f82da4b34173a112a1cadf291e0983776a047f0f", "oa_license": "CCBYNC", "oa_url": "https://onlinelibrary.wiley.com/doi/pdfdirect/10.1002/osp4.92", "oa_status": "GOLD", "pdf_src": "PubMedCentral", "pdf_hash": "f82da4b34173a112a1cadf291e0983776a047f0f", "s2fieldsofstudy": [ "Economics", "Medicine" ], "extfieldsofstudy": [ "Medicine" ] }
252746338
pes2o/s2orc
v3-fos-license
GCET Lives : One of the consequences of the social isolation caused by the COVID19 pandemic caused was that it discouraged many students from continuing with studies. This fact was identified by the members of the research group, and it is a situation faced by many people. In view of this scenario, GCET came up with the idea of creating a Lives project, to encourage learning during the pandemic. This study aims to encourage students and teachers, and society in general, to continue teaching, performing research and complementing studies during the period of social isolation, through the organization of weekly Lives – live videos transmitted on the Instagram platform. The first step in the project was to conduct bibliographic and documentary research, seeking the necessary background for the construction and elaboration of the themes to be addressed. Weekly meetings were held, via videoconferencing, to plan the Lives and facilitate communication and preparation of the advance planning to achieve the project objective. The email addresses and Whatsapp accounts of the research group were also used for correspondence, to answer questions, receive suggestions, and conduct surveys on the issues to be addressed during the Lives. Reports on the Lives were prepared and made available on the YouTube, Podcast and Instagram accounts of GCET. This support was essential for the success of the Lives, and for the satisfaction of the public. The project was carried according to the following steps: identification of the problem, theoretical input, weekly planning, survey, distribution of themes and technical support. The project aired a total of sixteen Lives in the period April 17 to July 9, analyzing the ability to meet specific needs and social inclusion. The Lives were held once a week addressing different themes, according to the theoretical findings and the demands of the target audience, resulting in the construction of knowledge and providing new skills to society, encouraging and motivating the practice of scientific knowledge. The experience of the GCET Lives project impacted not only the external participants but also its creators, as it became a motivational tool for scientific knowledge during social isolation. The project demonstrated that technology can be used for different purposes, enabling a new restructuring of scientific knowledge. INTRODUCTION With the emergence of the Covid-19 pandemic, which caused social isolation and made it impossible to carry out on-site classes, the Tourism Culture and Studies Group (GCET) of the Federal University of Paraíba (UFPB), comprised of graduation and post-graduation volunteer members, decided to launch the GCET Lives project, in order to motivate students, teachers, researchers, and society in general. The main objective of the project was to create and make available a schedule of Lives on the Instagram platform, to stimulate teaching and research and complement studies during the period of social isolation. The Lives were aired on Thursdays at 9:00 p.m. on Instagram @grupogcet. For a better understanding of the GCET Lives project, its problems and relevance and the methodological procedures that was used as the foundation are presented below. After the methodology, the results of the project are presented, giving the location, research and the target audience, offering as much information as possible so that the reader can understand this stage of the project. 
Finally, some practical implications and final considerations are presented, highlighting the importance of the project and its impacts and originality. Problem and Relevance The atypical scenario caused by the Covid-19 pandemic, aimed at controlling the activity of the virus, led to social isolation in various countries around the world. This isolation has been a huge challenge for people's mental health, making them more vulnerable to stress and anxiety and affecting their productivity. Many people are currently facing a lack of encouragement to continue with their learning processes, although everybody reacts differently to this kind of limitation. Consequently, GCET envisioned the creation of the Lives project to encourage the remote continuation of studies and research. The project was aimed at positively impacting its viewers by stimulating learning through the themes presented. The Lives were the means used to achieve the proposed objective. The themes of the GCET Lives covered issues related to tourism, the hotel trade, accessibility, tourism marketing and sustainability. Weekly articles offered information about the Lives (APPENDIX B) and were published on websites, blogs and social networks of the Federal University of Paraíba, which supports the project. The question that guides this research was how to create and make available a program of Lives that will stimulate teaching, research and complementation of studies during the period of social isolation resulting from the Covid-19 pandemic. METHODOLOGICAL ASPECTS Initially, bibliographic and documentary research was carried out on national and international scientific books and articles, through search platforms that address issues related to tourism, hospitality, accessibility, tourism marketing and sustainability, which are the focus of discussion in the Lives. According to Malhotra (2011), as in the case of scientific articles obtained from reliable search platforms, secondary data offers advantages over primary data. To develop the weekly Lives, bibliographic and documentary research was carried out on the themes to be addressed; based on this framework, scripts were created addressing the program content. During this process, bibliographic research supported the construction of scientific knowledge through a theoretical deepening on a specific theme (RODGERS; KNAFL, 2000), since new theories and possibilities for discussion and critical analyses emerge out of this process (VOSGERAU; ROMANOWS-KI , 2014), and consequently, advances are made in relation to the research (BOTELHO; CUNHA; MACEDO, 2011). For the planning, development and accomplishment of the GCET Lives project, it was necessary to use videoconferencing platforms during the weekly planning meetings, due to restrictions on movement that prevented face-to-face meetings. In this context, Barbosa, Viegas and Batista (2020) highlight that we are increasingly witnessing technological advances to solve our problems or achieve our desires. Thus, technology is on of the few allies left to us in the face of the social isolation that has been thrust upon the world. The authors also report the importance of continuing the teaching and the students' learning process, despite the face of difficulties of pedagogical adaptation. Even knowing that difficulties could arise along the way, GCET members chose to continue their activities online. Therefore, besides the videoconferencing, other means of communication were used to facilitate the development of the ongoing activities. 
Thus, Whatsapp and Messenger groups were created, since advance planning was essential to achieve the project objectives. Figure 1 below shows the stages in the implementation of the project. The identification of the problem emerged out of a troubled period that mankind is currently going through. The Covid-19 pandemic has led to the cancellation of many on-site classes, making personal interaction and mutual learning difficult or impossible. So GCET decided to break new ground and use the available technological resources, as its on-site activities were also canceled. Initially, the necessary theoretical support was sought under different theoretical perspectives capable of contributing to the elaboration of tools to enable the continuity of academic learning for students and teachers. Based on this theoretical foundation, GCET members performed the weekly planning, gathering information on the themes to be addressed in the Lives, always on matters related to tourism and hospitality. The choice of these themes was also based on the research lines of GCET, i.e.. management and marketing, sustainability, accessibility and senior citizens, as well as covering topics related to the difficulties and experiences faced by students and teachers in the academic environment. Subsequently, the themes were discussed with group members and external participants according to the level of affinity between the theme and students and teachers, providing more concise conversation and greater theoretical deepening. The invitation was extended to individuals not related to the group or to the university, in order to bring different views on the themes addressed. These other participants were selected based on their personal and professional experiences in the areas to be discussed in the Lives. The group members also gave technical support to each Live, controlling the time and the script and schedule, organizing questions to be asked, and assisting the invited guests. Interactivity with the public was encouraged during the Lives, and both previously submitted questions and those asked "live" were answered. Since Instagram limits the duration of a Live to 60 minutes, the answers to questions not answered during the session were made available on the GCET social networks. The group's email address and Whatsapp account were also given, to answer any further questions, receive suggestions and carry out surveys about the next topic to be presented. Lives reports were made available on YouTube and Podcast, and summaries of topics were made available on Instagram. This support was essential for the success of the Lives and the satisfaction of the public. Evaluations made it possible to obtain positive feedback for the following Lives. Activities were structured by the project coordination, with the main role of designing didactic structures that would allow both the project participants and the target audience to develop an autonomous learning process. RESULTS The GCET Lives project, which started on April 17 and ended on July 9, 2020, was intended to stimulate the students' interest during the period of social isolation and contribute to the creation of new educational knowledge. Due to the success achieved, in August 2020 the project GCET LIVES II was started, keeping the same dynamic. 
In order to implement this project, members of the Tourism Culture and Studies Group -GCET, coordinated by a teacher from the Tourism course of the Federal University of Paraíba -UFPB, were supported by the Post Graduate Continuous Flow (Fluxo Contínuo de Extensão) -FLUEX of UFPB. Thus, the partnership between the University Post Graduate Units and the research group enabled students to develop relevant academic works related to the problems experienced. As a result of the project, 16 Lives were aired (APPENDIX A), 5 of which focused on research, 4 on themes related to the importance of maintaining studies even during isolation, 5 on tourism, and 2 in the field of Law. The audience that watched and interacted with the lives was mostly composed of students and teachers from the fields of tourism and hospitality, but other professionals also participated, from areas such as Law, environment, administration and marketing, demonstrating that capacity of the project to interact with other areas. The experiences gained during the project generated interdisciplinarity and established communication with other disciplines, with the aim of unifying concepts that could contribute to the resolution of shared problems or studies, according to Berti (2007) and Lenior (2008). In this sense, interdisciplinarity was found through Lives debaters, professionals from the tourism, hospitality, law, business administration and marketing areas, including undergraduates, postgraduates and those with master's and doctorate degrees. Interaction among students during the Lives was seen in the questions asked before and during the live broadcasts, which stimulated debate and participation. The project is characterized according to values that can be found in the experience of social technology. One of the factors involved in the construction and development of this experience is its ability to meet specific needs, since the project meets a demand from students and teachers who are socially isolated. Another is the direct participation of the population and in this process, the project managed to capture the attention not only of students and teachers, but also that of the population in general. Reducing the sense of social inclusion was another factor. Everyone could participate in the Lives, and Applied Tourism ISSN: 2448-3524 https://siaiap32.univali.br/seer/index.php/ijth/index curring in many Brazilian states where, according to Soares and Pinto (2020), the Covid 19 pandemic led to increasing numbers of abandoned pets by Brazilian families who feared contamination, as they believed animals to be one of the main contaminating agents, although this has not been confirmed. The eighth Live discussed the professional activity performed on a cruise ship. The main objective of this approach was to enable viewers to learn more about the jobs performed on cruise ships, and to discuss some of the challenges and benefits of such work. The ninth Live was conceived and developed, once again in response to requests from Instagram followers, addressing in more depth the aspects of the research group and its importance for the academic and professional life. The tenth Live, which was transmitted on June 4, covered the theme of postgraduate studies, aiming to contextualize challenges and achievements in this process experienced and desired by many individuals. 
The eleventh Live discussed the Film commissions and their benefits for tourism; the main aspects of this theme were analyzed through literature, to provide viewers with a better understanding of the subject. Ordinary wines and harmonization was the theme of the twelfth Live of the project, held on June 12, aimed at providing knowledge about an important discipline of tourism and hospitality courses, Food & Beverage, in a more dynamic and casual way. The thirteenth Live focused on scientific research through a report of experience of the Institutional Volunteer Research Program (Programa Institucional de Voluntariado em Iniciação Científica -PIVIC, addressing the importance of participation in the program and its benefits for students' academic and professional development. Multidisciplinarity in research groups and their challenges and opportunities was the theme of the fourteenth Live, which informed viewers about the way the lines of research in the GCET group are worked on, and how each line has been addressing its themes. GCET's fifteenth Live was conceived and elaborated with the aim of giving viewers a perspective of the new tourism trends after COVID-19, with relevant and current issues on the prospects of activities in the tourism market getting back to normal. The sixteenth Live completed the first stage of the Lives cycle, focusing on movie-induced tourism and its applicability. It aimed to clarify for viewers the importance of this segment for the tourism sector and the good results it has been bringing for the sector over the years, focusing on a differentiated and little studied segment in the tourism field. It should be noted that some of the Lives did not take place on Thursdays due to impediments on the part of the participants. Instead, they often took place on Wednesdays and Saturdays, always with prior communication and widespread dissemination to the target review them afterwards by accessing the YouTube platform at any time. Collaboration in the learning process was another factor, as the project explored a new way of learning, through the use of digital resources. Thus, the choice of this type of technology is justified by the fact that it can reach a larger number of people, since the Instagram social network is often used by students and teaching staff of educational institutions, and also because the application software enables people to interact in real time. The project considered distance learning as a model for development and design, due its low cost, requiring only an internet connection for transmission. It can therefore be easily replicated by other research groups. It is worth mentioning that the project not only contributed to the viewer audience; those involved in its organization also had the opportunity to participate in interdisciplinary teaching experiences with digital tools, as well as interacting with the public in a more dynamic and creative way. The first Live of the project was held on April 17th. This first contact of the research group with Instagram audiences provided positive feedback. The theme of this live was to encourage students, teachers, and society in general to continue studying and seeking personal growth through the creation of a daily routine. This theme was considered very appropriate, due to the suspension of classes. The theme of the second Live of the project related to the research groups and their importance in academic life. 
It addressed the theoretical findings and surveys carried out in the first stage of the project, explaining the importance and benefits of joining a research group. The third Live clarified and dispel doubts about the MBA -Master in Business Administration, and its importance for the academic and professional life, once again aiming to encourage the continuity of studies. The theme of the fourth Live was also considered opportune and of great value, not only for students but also for teachers in the process of adapting to the new dynamic adopted by most Brazilian educational institutions. Thus, aspects of the challenges of distance learning during social isolation times were addressed. The fifth Live was held on May 9 at 9.00 p.m. and continued the theme of the MBA and its importance in the academic and professional life, as this was a subject of great interest to the audience. Due to requests from viewers via WhatsApp, the sixth Live of the project aimed to simplify the methodological design for TCCs and dissertations. The seventh Live addressed the topic of the Animal Law in times of coronavirus. This theme was chosen due to requests from GCET followers on Instagram and the theoretical basis carried out at the beginning of the project, addressing the interdisciplinarity exercised by tourism. This theme was also linked to a situation oc-Applied Tourism ISSN: 2448-3524 https://siaiap32.univali.br/seer/index.php/ijth/index stagram for the Lives project fulfilled this role, as it is already part of most people's routine, contributing to greater participation and involvement. It was therefore possible to present theoretical and practical content with the engagement of the target audience. The insertion of new technologies contributed not only to the encouraging students, but also to the educators' personal and professional growth. Since science involves a constant search for production and socialization of its results, other forms of publication have been disseminated, with the important role of spreading knowledge in diverse cultural spheres, which also important for the process of knowledge production (MEIRA, 2016). Therefore, Instagram is an effective tool to facilitate learning, as well as contributing to scientific dissemination and providing networked learning. The interactivity experienced in the Instagram Lives provides an opening for greater communication, exchanges and participation (MEIRA, 2016). Therefore, the project demonstrated that learning is possible even amidst adversities that can occur in the world, with the use of convenient resources available to mankind. Scientific studies have long demonstrated the importance of technological advances for human beings, but few studies have addressed these advances in the formation of new academic teaching mechanisms. Even those who do address this theme start from a systematic bias, restricted to new possibilities that can be contemplated, in a world where modernity is talked about and thought about in a retrograde way. Thus, the GCET Lives project demonstrated that it is possible to maintain studies and build a knowledge base even in the face of the difficulties of social isolation due to COVID-19. Finally, the project was able to encourage the continuity of group research even during the pandemic. 
The project fulfilled its teaching role in that each Live brought a topic to be exposed and debated, providing not only rapid dissemination of information, reaching people everywhere, regardless of their physical location, but also providing a "visual" encounter and the opportunity for debates that often take place in classrooms. audience. It can be said that this project was able to boost teaching through the proposal of new methodologies, with the use of digital tools for transmitting and complementing the content, allowing digital media to be incorporated into the disciplines (MORAN, 2013), and providing an important support tool for the teaching and learning process. In this case, the Lives on Instagram provided a means of obtaining information, but also for producing knowledge and dispelling doubts, generating an excellent opportunity for study and research in college education. Universities must keep up with the demands of a society connected by social media, which is currently part of its routine, and these media must not be excluded from the teaching process. FINAL CONSIDERATIONS The GCET Lives project comes at a time when the world is forced to suddenly stop its activities and remain in social isolation due to the COVID19 pandemic, preventing the continuity of people's routines. Thus, in the midst of the pandemic, the project aimed to encourage teachers, students and the population in general to expand their field of study through new technologies and give GCET members the opportunity to overcome their limitations in relation to new technologies, improve their oratory skills, reflect on discussions inherent to their field of study, and build a reliable information base. The project encouraged other teachers and students from different areas to discuss the topics that had been suspended at universities due to the temporary interruption of classes. The Lives provided not only quick dissemination of information, reaching people everywhere, but also visual encounter, motivating interest in continuing academic studies during the lockdown period, and contributing to the creation of new educational knowledge. This research was characterized as a case study, using the methodology of applied research which, according to Prodanov and Freitas (2013), seeks to solve social problems. It is considered innovative, as it uses technological advances to continue academic learning, thus seeking, through the resource provided by the Instagram platform, to interact with the academic public. Although Instagram had already been used, the innovative factor refers to the fact that this social network is used for academic purposes related directly to teaching, research and extension courses. Instagram was chosen for the Lives as it is a social network with great participation, with an estimated number of 64 million active accounts in Brazil, and able to reach the target audience. According to Boyd (2014), contemporary society seeks innovative and dynamic ways of transmitting knowledge and the choice of In- Multi-disciplinarity in research groups -challenges and opportunities in practice 06/25 09:00PM The resumption of tourism -perspectives and trends 07/02 09:00PM Movie-induced tourism between understanding and applicability 07/09 09:00PM Appendix B -BANNERS OF GCET LIVES
2022-10-07T15:04:21.035Z
2022-05-24T00:00:00.000
{ "year": 2022, "sha1": "57948ad3af7158f5b5db731fa876ead3642868b3", "oa_license": "CCBYSA", "oa_url": "https://periodicos.univali.br/index.php/ijth/article/download/18623/10684", "oa_status": "GOLD", "pdf_src": "Anansi", "pdf_hash": "14ecc8a2e0195f5d29a46b403ce83db3f4be047d", "s2fieldsofstudy": [ "Education" ], "extfieldsofstudy": [] }
37563750
pes2o/s2orc
v3-fos-license
Chiral Transition and the Quark-Gluon Plasma We discuss a few recent results for the equation of state of strongly interacting matter and the dynamics of chiral phase transition. First, we consider the cases of very high densities, where weak-coupling approaches may in principle give reasonable results, and very low densities, where we use the framework of heavy-baryon chiral perturbation theory. We also speculate on the nature of the chiral transition and present possible astrophysical implications. Next, we discuss the kinetics of phase conversion, through the nucleation of bubbles and spinodal decomposition, after a chiral transition within an effective field theory approach to low-energy QCD. We study possible effects resulting from the finite size of the expanding system for both the initial and the late-stage growth of domains, as well as those effects due to inhomogeneities in the chiral field which act as a background for the fermionic motion. We also argue how dissipation effects might dramatically modify the picture of explosive hadronization. I. INTRODUCTION During the last decade, the investigation of strongly interacting matter under extreme conditions of temperature and density has attracted an increasing interest.In particular, the new data that started to emerge from the high-energy heavy ion collisions at RHIC-BNL, together with an impressive progress achieved by finite-temperature lattice simulations of Quantum Chromodynamics (QCD), provide some guidance and several challenges for theorists. In fact, lattice QCD results [1] suggest that strongly interacting matter at sufficiently high temperature undergoes a phase transition (or a crossover) to a deconfined quark-gluon plasma (QGP).Despite the difficulties in identifying clear signatures of a phase transition in ultrarelativistic heavy ion collisions, recent data from the experiments clearly point to the observation of a new state of matter [2]. All this drama takes place in the region of nonzero temperature and very small densities of the phase diagram of QCD.On the other hand, precise astrophysical data appear as a new channel to probe strongly interacting matter at very large densities.Compact objects, such as neutron stars, whose interior might be dense enough to accomodate deconfined quark matter, may impose strong constraints on the equation of state for QCD at high densities and low temperatures [3].Moreover, several preliminary results from the lattice at nonzero values for the quark chemical potential, µ, start to appear [1]. 
In this paper we briefly review a few recent results for the equation of state of strongly interacting matter and for the dynamics of the chiral phase transition.First, we consider the case of cold and dense QCD.In the extreme cases of very high densities, where weak-coupling approaches may in principle give reasonable results, and very low densities, where we use the framework of heavy-baryon chiral perturbation theory, we can draw a reasonably clear picture of the equation of state.At the end, we discuss results that point to important corrections brought about by a nonzero strange quark mass.We also speculate on the nature of the chiral transition and present possible astrophysical implications.Next, we discuss the kinetics of phase conversion, through the nucleation of bubbles and spinodal decomposition, after a first-order chiral transition within an effective field theory approach to low-energy QCD.We study possible effects resulting from the finite size of the expanding system for both the initial and the late-stage growth of domains, as well as those effects due to inhomogeneities in the chiral field which act as a background for the fermionic motion.We also consider dissipation effects which might dramatically modify the picture of explosive hadronization. A. High-density limit Let us first consider the case of cold and very dense strongly interacting matter.For high enough values of the quark chemical potential, there should be a quark phase due to asymptotic freedom.In this regime of densities, one is in principle allowed to use perturbative QCD techniques [4][5][6][7][8], which may be enriched by resummation methods and quasiparticle model descriptions [8][9][10][11], to evaluate the thermodynamic potential of a plasma of massless quarks and gluons (see Figure 1, for the perturbative result).Different approaches seem to agree reasonably well for µ >> 1 GeV, and point in the same direction even for µ ∼ 1 GeV and smaller, where we are clearly pushing perturbative QCD far below its region of applicability.However, at some point between µ ≈ 313 MeV, and µ ≈ 1 GeV, one has to match the equation of state for quark matter onto that for hadrons. As we argued in [6,7], depending on the nature of the chiral transition there might be important consequences for the phenomenology of compact stars.For instance, in the case of a strong first-order chiral transition, a new stable branch may appear in the mass-radius diagram for hybrid neutron stars, representing a new class of compact stars (see Figure 2).On the other hand, for a smooth transition, or a crossover, one finds only the usual branch, generally associated with pulsar data.This is an important issue for the ongoing debate on the radius measurement of the isolated neutron star candidate RX J1856.5-3754, which might be a quark star [14].For details, see Refs.[6] and [7]. B. 
Low-density limit For pure neutron (asymmetric) matter, which will play the role of hadrons at "low" density here, Akmal, Pandharipande and Ravenhall [15] have found that to a very good approximation we have, up to ∼ 2n 0 , the following energy per baryon: which is approximately linear in the baryon density, n.Here, n 0 ∼ 0.16 baryons/fm 3 is the saturation density for nuclear matter.From this relation, we can extract the pressure: From these results, we see that: (i) even at "low" densities we have a highly nonideal Fermi liquid, since free fermions would give ε/n − m ∼ n 2/3 ; (ii) energies are very small on hadronic scales (if we take f π , as a "natural scale"); (iii) energies are small not only for nuclear matter (nonzero binding energy), but even for pure neutron matter (unbound).Then, this might be a generic property of baryons interacting with pions, etc., and not due to any special tuning. In order to investigate the pion-nucleon interaction at nonzero, but low, density, we considered the following chiral Lagrangian [16] (see also [17] and [18]) where L 0 π is the free Lagrangian for the pions, π, ψ i represent nucleons (in n s species), and µ is the chemical potential for the nucleons.From the Goldberger-Treiman relation we have: and, from the Particle Data Group, m N = 939 MeV, m π = 135 MeV, f π = 130 MeV, and g πNN = 13.1 The goal is to compute the nucleon and the pion one-loop self-energy corrections due to the medium up to lowest order in the nucleon density (only nonzero µ contributions) by using the technique of heavy-baryon chiral perturbation theory.Therefore, we adopt a non-relativistic approximation: Moreover, we assume that external legs are near the massshell (p 0 + µ) 2 − ω 2 p ≈ 0, that pions are dilute, that we have small values of Fermi momentum, and consider only leading order in (p f /m N ), (p f / f π ), etc. One-loop calculations within this framework provide the following result for f π : as the Fermi momentum, p f , increases -restoring chiral symmetry -f π must go down.Indeed, from chiral perturbation theory, we obtain (a ≡ (g 2 A /48π 2 )): Then, from Brown-Rho scaling [19] one would expect that all quantities should scale in a uniform fashion.However, we obtain the following result for the nucleon mass [16]: We can draw some tentative conclusions about the hadronic pressure.To leading order, the non-ideal terms in the pressure are proportional to Σ 0 , defined through the nucleon selfenergy on mass shell Σ(p 0 ms , p) ∼ −[(ip 0 ms + µ)γ 0 + iγ i p i + m]Σ 0 (p), which is very small [18].At higher order, even if corrections to the pion propagator are large, their effect on the nucleon propagator, and the free energy, can still be small, as a large correction to a small number.Thus the possibility of a hadronic phase with a small pressure, required for a new class of quark stars, remains viable. In order to be able to make clear predictions for the phenomenology of compact stars, and to have a better understanding of this region of the QCD phase diagram, one has to find a way to describe the intermediate regime of densities in the equation of state, where perturbative calculations do not work.The study of effective field theory models might bring some insight to this problem. C. 
Role of nonzero quark mass

For almost twenty years the effects of a nonzero strange quark mass on the equation of state of cold and dense QCD were considered to be negligible (less than 5%), thereby yielding only minor corrections to the mass-radius diagram of compact stars. Even the most recent QCD approaches [6,8-11] generally neglected quark masses and the presence of a color superconducting gap as compared to the typical scale for the chemical potential in the interior of compact stars, ∼ 400 MeV and higher. However, it was recently argued that both effects should matter in the lower-density sector of the equation of state [13]. In fact, although quarks are essentially massless in the core of quark stars, the mass of the strange quark runs up, and becomes comparable to the typical scale for the chemical potential, as one approaches the surface of the star.

By computing the thermodynamic potential to first order in α_s, and including the effects of the renormalization group running of the coupling and strange quark mass, we showed that the corrections can be very large (up to 25%), and dramatically affect the structure of compact stars, as can be seen from the modifications of the mass-radius diagram [20].

The effects of the finite strange quark mass on the total pressure and energy density for electrically neutral quark matter (plus electrons) are given in Fig. 3. There we show results for 3 light flavors and running coupling, corresponding to the case considered in [6], and for 2 light flavors and one massive flavor, with both running coupling and strange quark mass (which reaches m_s ∼ 137 MeV at µ = 500 MeV). As can be seen from this figure, there is a sizable difference between zero and finite strange quark mass pressure and energy density for the values of the chemical potential in the region that is relevant for the physics of compact stars. As has been noticed by several authors [6,10,13], the resulting equation of state, ε = ε(P), can be approximated by a non-ideal bag model form, where a ∼ 3 is a dimensionless coefficient and B_eff is the effective bag constant of the vacuum. Concentrating on the low-density part of the equation of state, one finds for massless strange quarks the parameters B_eff^{1/4} ≈ 117 MeV and a ≈ 2.81, while the inclusion of the running mass raises these values to B_eff^{1/4} ≈ 137 MeV and a ≈ 3.17 (all values having been obtained by including a running α_s in the equations of state). Therefore, we expect important consequences in the mass-radius relation of quark stars due to the inclusion of a finite mass for the strange quark.

III. FIRST-ORDER CHIRAL TRANSITION AND DYNAMICS OF HADRONIZATION

A.
Effective Model To model the mechanism of chiral symmetry breaking present in QCD, and to study the dynamics of phase conversion after a temperature-driven chiral transition, one can resort to low-energy effective models.In particular, to study the mechanisms of bubble nucleation and spinodal decomposition in a hot expanding plasma, it is common to adopt the linear σ-model coupled to quarks, where the latter comprise the hydrodynamic degrees of freedom of the system [21][22][23][24][25][26].The gas of quarks provides a thermal bath in which the longwavelength modes of the chiral field evolve, and the latter plays the role of an order parameter in a Landau-Ginzburg approach to the description of the chiral phase transition.The gas of quarks and anti-quarks is usually treated as a heat bath for the chiral field, with temperature T .The standard procedure is then integrating over the fermionic degrees of freedom, using a classical approximation for the chiral field, to obtain a formal expression for the thermodynamic potential of an infinite system. Let us consider a chiral field φ = (σ, π), where σ is a scalar field and π i are pseudoscalar fields playing the role of the pions, coupled to two flavors of quarks according to the Lagrangian Here q is the constituent-quark field q = (u, d) and µ q = µ/3 is the quark chemical potential.The interaction between the quarks and the chiral field is given by M(φ) = g (σ + iγ 5 τ • π) , and V (φ) = λ 2 4 σ 2 + π 2 − v 2 2 − h q σ is the self-interaction potential for φ. The parameters above are chosen such that chiral SU L (2) ⊗ SU R (2) symmetry is spontaneously broken in the vacuum.The vacuum expectation values of the condensates are σ = f π and π = 0, where f π = 93 MeV is the pion decay constant.The explicit symmetry breaking term is due to the finite current-quark masses and is determined by the PCAC relation, giving h q = f π m 2 π , where m π = 138 MeV is the pion mass.This yields The value of λ 2 = 20 leads to a σ-mass, m 2 σ = 2λ 2 f 2 π + m 2 π , equal to 600 MeV.For g > 0, the finite-temperature one-loop effective potential also includes a contribution from the quark fermionic determinant.In what follows, we treat the gas of quarks as a heat bath for the chiral field, with temperature T and baryon-chemical potential µ.Then, one can integrate over the fermionic degrees of freedom, obtaining an effective theory for the chiral field φ.Using a classical approximation for the chiral field, one obtains the thermodynamic potential where V is the volume of the system and G E is the fermionic Euclidean propagator.From the thermodynamic potential one can obtain all the thermodynamic quantities of interest. B. Inhomogeneities in the Chiral Field To compute correlation functions and thermodynamic quantities, one has to evaluate the fermionic determinant within some approximation scheme.In 1D systems one can usually resort to exact analytical methods [27].In practice, however, the determinant is usually calculated to one-loop order assuming a homogeneous and static background chiral field.Nevertheless, for a system that is in the process of phase conversion after a chiral transition, one expects inhomogeneities in the chiral field to play a role in driving the system to the true ground state. 
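Before turning to the treatment of such inhomogeneities, the tree-level parameter choices of the effective model quoted above can be verified with a short numerical check. The sketch below is illustrative only: it restates relations given in the text (h_q = f_π m_π², m_σ² = 2λ² f_π² + m_π², with f_π = 93 MeV, m_π = 138 MeV and λ² = 20); the expression used for v² is the standard consequence of requiring ⟨σ⟩ = f_π to minimize the potential and is an assumption here, since the corresponding equation is not reproduced in this text.

```python
import math

# Linear sigma-model parameters quoted in the text (energies in MeV)
f_pi = 93.0    # pion decay constant
m_pi = 138.0   # pion mass
lam2 = 20.0    # quartic coupling lambda^2

# Explicit symmetry-breaking term from the PCAC relation: h_q = f_pi * m_pi^2
h_q = f_pi * m_pi**2

# Sigma mass from m_sigma^2 = 2 * lambda^2 * f_pi^2 + m_pi^2
m_sigma = math.sqrt(2.0 * lam2 * f_pi**2 + m_pi**2)

# Assumed minimization condition <sigma> = f_pi, giving v^2 = f_pi^2 - m_pi^2 / lambda^2
v = math.sqrt(f_pi**2 - m_pi**2 / lam2)

print(f"h_q     = {h_q:.3e} MeV^3")
print(f"m_sigma = {m_sigma:.1f} MeV   (quoted as approximately 600 MeV)")
print(f"v       = {v:.1f} MeV")
```

Running this reproduces m_σ ≈ 604 MeV, consistent with the value of roughly 600 MeV quoted above for λ² = 20.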
We propose an approximation procedure to evaluate the finite-temperature fermionic determinant in the presence of a chiral background field, which systematically incorporates effects from inhomogeneities in the chiral field through a derivative expansion.The method is valid for the case in which the chiral field varies smoothly, and allows one to extract information from its long-wavelength behavior, incorporating corrections order by order in the derivatives of the field. The Euler-Lagrange equation for static chiral field configurations contains a term which represents the fermionic density ρ( where | x 0 is a position eigenstate with eigenvalue x 0 , and ν q = 12 is the color-spin-isospin degeneracy factor.In momentum representation: We can transfer the x 0 dependence to M( x) through a unitary transformation, expand M( x + x 0 ) around x 0 , and use xi = −i∇ k i to write , (13) where x 0 is a c-number, not an operator, and ∆M( x, If we focus on the long-wavelength properties of the chiral field and assume that the static background, M( x), varies smoothly and fermions transfer a small ammount of momentum to the chiral field, we can expand the expression above in a power series: which provides a systematic procedure to incorporate corrections brought about by inhomogeneities in the chiral field to the quark density, so that we can calculate ρ( x) = ρ 0 ( x) + ρ 1 ( x) + ρ 2 ( x) + ••• order by order in powers of the derivative of the background, M( x).The leading-order term in this gradient expansion for ρ( x) can be easily calculated and yields the well-known mean field result for ρ.The net effect of this leading term is to correct the potential for the chiral field to V e f f = V (φ) +V q (φ), where where . This sort of effective potential is commonly used as the thermodynamic potential in a phenomenological description of the chiral transition for an expanding quark-gluon plasma created in a high-energy heavy-ion collision [22][23][24][25][26].However, the presence of a nontrivial background field configuration, e.g. a bubble, can in principle dramatically modify the Dirac spectrum, hence the determinant [27,28].Results for the correction of the Laplacian term will be presented elsewhere [29]. C. 
Finite-Size Effects on Nucleation of Hadrons In the process of phase conversion through bubble nucleation in a QGP of finite size, the set of all supercritical bubbles integrated over time will eventually drive the entire system to its true vacuum.The scales that determine the importance of finite-size effects are the typical linear size of the system, the radius of the critical bubble and the correlation length.For definiteness, let us assume our system is described by a coarse-grained Landau-Ginzburg potential, U(φ, T ), whose coefficients depend on the temperature.For the case to be considered, the scalar order parameter, φ, is not a conserved quantity, and its evolution is given by the time-dependent Landau-Ginzburg equation where γ is the response coefficient which defines a time scale for the system.The equation above is a standard reactiondiffusion equation, and describes the approach to equilibrium.If U(φ, T ) is such that it allows for the existence of bubble solutions (taken to be spherical for simplicity), then supercritical (subcritical) bubbles expand (shrink), in the thin-wall limit, with the following velocity: where R c = (d − 1)σ/∆F and ∆F is the difference in free energy between the two phases.The equation above relates the velocity of a domain wall to the local curvature.The response coefficient, γ, can be related to some characteristic collision time.One can measure the importance of finite-size effects for the case of heavy-ion collisions by comparing, for instance, the asymptotic growth velocity (R >> R c ) for nucleated hadronic bubbles to the expansion velocity of the plasma. In the Bjorken picture, the typical length scale of the expanding system is L(T ) ≈ (v z t c )(T c /T ) 3 = L 0 (T c /T ) 3 , where v z is the collective fluid velocity and L 0 ≡ L(T c ) is the initial linear scale of the system for the nucleation process which starts at T ≤ T c .The relation between time and temperature provided by the cooling law that emerges from the Bjorken picture suggests the comparison between the following "velocities": the asymptotic bubble growth "velocity", and the plasma expansion "velocity" v L ≡ (dL/dT ) = −(3L 0 /T c )(T c /T ) 4 .The quantity b is a number of order one to first approximation, and comes about in the estimate of the phenomenological response coefficient γ(T ) ≈ b/2T .Using the numerical values adopted previously and σ/T 3 c ∼ 0.1, we obtain [30] One thus observes that the bubble growth velocity becomes larger than the expansion velocity for a supercooling of order θ ≈ v z /20 ≤ 5%.A simple estimate points to a critical radius larger than 1 fm at such values of supercooling [24].Therefore, finite-size effects appear to be an important ingredient in the phase conversion process right from the start in the case of high-energy heavy-ion collisions [30]. D. 
Effects from dissipation

Recently, we have considered the effects of dissipation in the scenario of explosive spinodal decomposition for hadron production [31][32][33][34] during the QCD transition after a high-energy heavy ion collision in the simplest fashion [35]. Using a phenomenological Langevin description for the time evolution of the order parameter in a chiral effective model [24], inspired by microscopic nonequilibrium field theory results, we performed real-time lattice simulations for the behavior of the inhomogeneous chiral fields. It was shown that the effects of dissipation could be dramatic in spite of very conservative assumptions: even if the system quickly reaches this unstable region, there is still no guarantee that it will explode.

The framework for the dynamics was assumed to be given by a Langevin equation for a real scalar field φ, where V_eff(φ) is a Landau-Ginzburg effective potential and Γ, which can be seen as a response coefficient that defines time scales for the system and encodes the intensity of dissipation, is usually taken to be a function of temperature only, Γ = Γ(T). The function ξ(x, t) represents a stochastic (noise) force, assumed Gaussian and white, so that ⟨ξ(x, t)⟩ = 0 and ⟨ξ(x, t) ξ(x′, t′)⟩ = 2ΓT δ(x − x′) δ(t − t′), in accordance with the fluctuation-dissipation theorem. Results for the average value of the chiral field, ⟨φ⟩, in units of its vacuum value, φ_vac, as a function of time for Γ/T = 0, 2, 4 are shown in Fig. 4, where one can see the large delay introduced by dissipation effects.

IV. FINAL REMARKS

In the discussion above, we reviewed a few aspects of the physics of the chiral transition and the quark-gluon plasma. Within the issues that have been considered, two points clearly stand out as important challenges for field theorists. The first one is related to the behavior of the equation of state at finite density and zero temperature close to the critical region. There, neither perturbative QCD, improved or not, nor low-density chiral perturbation theory are of great help. Since reliable lattice results are still lacking in this sector of the phase diagram, one could possibly try to incorporate nonperturbative information into a quasiparticle description of the thermodynamics [36] to achieve a more realistic picture. Because this is the most relevant region of the equation of state for the physics of compact stars, astrophysical constraints will certainly be very welcome. The second is a proper understanding of the dynamics of the phase transition and thermalization. Besides the question of the nature of the phase transition as one varies temperature and chemical potential, which still belongs to an equilibrium analysis, one must also provide a satisfactory nonequilibrium description of the phase conversion process. In the case of heavy ion collisions, where relevant scales differ from the cosmological analog by several orders of magnitude, one has to consider in detail a combination of effects coming from dissipation, noise, expansion and finite size of the system, as well as transient non-Markovian corrections [37]. All these challenges, together with so many others not even mentioned here, make the study of the phase diagram of QCD a fascinating subject.

FIG. 1: Pressure in units of the free gas pressure as a function of the quark chemical potential from finite-density perturbative QCD.
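The Langevin dynamics described in the section on dissipation above can be illustrated with a minimal numerical sketch. The equation of motion used below, ∂²φ/∂t² − ∇²φ + Γ ∂φ/∂t + V′_eff(φ) = ξ, and the quartic double-well potential are assumptions adopted only for illustration; they are not the effective potential or the parameter values of the original lattice study. The noise is discretized so that it satisfies the fluctuation-dissipation relation ⟨ξ ξ′⟩ = 2ΓT δ(x − x′) δ(t − t′) quoted in the text.

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative 1D lattice and (dimensionless) parameters
N, dx, dt = 256, 0.5, 0.01     # sites, lattice spacing, time step
T, Gamma = 1.0, 2.0            # temperature and dissipation coefficient
lam, v = 1.0, 1.0              # quartic coupling and vacuum value of the field

def dV(phi):
    # Derivative of the assumed double-well potential V = (lam/4) * (phi^2 - v^2)^2
    return lam * phi * (phi**2 - v**2)

def laplacian(phi):
    return (np.roll(phi, 1) + np.roll(phi, -1) - 2.0 * phi) / dx**2

phi = 0.01 * rng.standard_normal(N)   # start near the symmetric (unstable) point
pi = np.zeros(N)                      # time derivative of phi

averages = []
for step in range(20000):
    # White noise with variance 2*Gamma*T/(dx*dt) per site and time step
    xi = rng.normal(0.0, np.sqrt(2.0 * Gamma * T / (dx * dt)), N)
    force = laplacian(phi) - dV(phi) - Gamma * pi + xi
    pi += dt * force
    phi += dt * pi
    if step % 2000 == 0:
        averages.append(phi.mean())

print("volume-averaged field at sampled times:", np.round(averages, 3))
```

Varying Γ/T in such a sketch illustrates the qualitative effect emphasized above: larger dissipation delays the evolution of the volume-averaged field towards the broken-symmetry minima.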
2017-09-13T11:34:36.702Z
2006-03-01T00:00:00.000
{ "year": 2006, "sha1": "f689f1a0bea0c6add6945acd66ac4764debabb83", "oa_license": "CCBYNC", "oa_url": "https://www.scielo.br/j/bjp/a/HznYsjStZ3Fg4TVYdvKvzfS/?format=pdf&lang=en", "oa_status": "GOLD", "pdf_src": "Anansi", "pdf_hash": "425a464a32d9feec8ac193adcdb84a8144e96e7c", "s2fieldsofstudy": [ "Physics" ], "extfieldsofstudy": [ "Physics" ] }
56533523
pes2o/s2orc
v3-fos-license
Modelling of road traffic noise Effective implementation of the EC Communication relating to urban mobility is dependent on adopting relevant indicators that will allow comprehensive assessment of the measures undertaken under the urban mobility plan. One of the fundamental parameters for this assessment is the equivalent sound level indicator, measured over the 24-hour period, used to evaluate the annoyance of the road traffic noise. Not only is its value important but also its variability. This paper reports the results of the analysis of traffic noise along a major road, performed using various descriptors. The data recorded continuously throughout the year by the traffic rate and noise monitoring terminal were used to construct the measurement database. Analysis of the results was performed for the period between March and June in 2011 and 2016. The road was renovated between 2013 and 2015. The noise variability assessment was conducted using classical and positional indicators. Measurement uncertainties were evaluated. Introduction The Communication of the European Commission on urban mobility [1] promotes "...cooperation across different policy areas and sectors (transport, land use and spatial planning, environment, economic development, social policy, health, road safety, etc.); across different levels of government and administration; as well as with authorities in neighbouring areas -both urban and rural."For these guidelines to be effectively and broadly deployed, it is necessary to adopt the solutions available within the so-called urban intelligent transport system regulations, concerning urban access, road safety issues, citizen and stakeholder engagement as well as fostering changes in mobility behaviour.The EC recommends solutions that will encourage inhabitants to use multimodal transport, choose transit over car, shift towards cycling and walking, and combine different modes within one travel chain.As for the urban road traffic management, public transport should be given priority over private transport and the speed of the vehicles should be controlled.Enhancing the traffic flow and discouraging drivers from speeding, thus reducing brake/acceleration cycles, will contribute to increased road safety for all citizens and to reduced air pollution and noise [2].The mobility plans will be effective provided that subsequent stages of the plan implementation are monitored and assessed, with the assessment results adapted to the changing conditions.The measures undertaken under the mobility plans are evaluated using common descriptors.There are fixed indicators, on which the evaluation is based, and supplementary indicators determined when the need arises.The fixed indicators include: -travel speed of cars and collective transport vehicles, -fine particulate matter concentration, PM2.5, PM10, BaP at each assessment point, -the number of inhabitants and dwellings exposed to road traffic noise, assessed using the L DWN and L N descriptors.Average values of some of these descriptors are determined periodically, once in five years, during noise mapping [3], but this is insufficient for the purpose of mobility change analysis.Several cities, including Kielce, have stationary monitoring systems constructed to register noise and traffic rates for 24 hours throughout the year [4].These systems, e.g., Enviro 151, are able to remotely configure and program measurement sessions.Measurement results, available on the internet in real time, illustrate the variations in the structure and rate 
of traffic in the city and allow the implementation of the digital acoustic mapping project.Kielce has more than ten such stations, both in the centre and on the outskirts of the city.This paper analyses the results from the equivalent sound level measurements recorded by one of these stations in 2011 and 2016. Measuring station The data under analysis were recorded by automatic sound and traffic volume continuous monitoring station located in Popiełuszki Avenue in Kielce.Kielce is a medium-size town located in south-central Poland (the Świętokrzyskie Mountains) within the moderate climate zone.Temperatures within a year range approximately from -5°C in January to +17°C in July.Average monthly precipitation is from 34 mm in October to 96 mm in July.The wind, predominantly from the south and west, reaches an average speed of about 3 m/s over a year.Kielce gets on average 70 days a year of snow on the ground. Popiełuszki Avenue is a road with four lanes of traffic separated by a 3m grass median.One side of the road is comprised of compact residential development, about 200m from the measuring station.The other side is the edge of a woods area.The road is part of the eastern bypass around Kielce and part of the national road No. 73 (Warszawa/Łódź -Kielce -Tarnów -Krosno), which is directly connected with the Trans-European Transport Networks (TEN-T).It is also the major part of the Tarnów-bound thoroughfare functioning as both the transit route and the city street. The results of acoustic measurements performed during the period from 25/03/2011 to 25/06/2011 and from 25/03/2016 to 25/06/2016 were split into three sub-intervals of a 24hour interval: day time, evening time and night time.In the years 2013 -2015, Popiełuszki Avenue was thoroughly renovated.The acoustic measurements were carried out with the SVAN 958A, a four-channel digital vibration analyser and a class 1 sound level meter, operating within the measuring frequency range 0.5 Hz to 20 kHz, depending on a microphone used.The frequency range is 3.5 Hz to 20 kHz when a Microtech Gefell MK250 free-field, prepolarised 1/2" condenser microphone with a sensitivity of 50 mV/Pa, SV 12L preamplifier is used.The temperature range within which the device is operable is from -10 o C to 50 o C. The resolution of the signal RMS detector is 0.1 dB.The measurements were carried out 24 hours a day.The RMS values of the A sound level were registered in the buffer every 1 s and the results were recorded every 1 minute.The data collected formed the basis for equivalent sound level calculation for three time intervals, i.e., from 6:00 to 18:00, from 18:00 to 22:00 and from 22:00 to 6:00.The microphone for the sound pressure measurements was mounted at a distance of 4 m from the edge of the road at a height of 4 m.Traffic volume was measured with a digital radar 245 MHz by Analysis of measurement results The most common noise indicator used to assess annoyance is the equivalent sound level ( ), expressed in (dB), defined as [5]: where: T -represents the overall measurement time, s ( ) -A-weighted sound pressure level, Pa -is the standardized reference sound pressure of 20 µPa -represents the effective sound pressure.Figure 1a compiles the measurement results for the equivalent sound level expressed as dB for T= 86400 s, recorded in 2011 and 2016.Figure 1b The graphs in Fig. 
1a indicate that despite the thorough renovation of the road, the values of underwent only slight changes.The range between the minimal values (Saturdays and Sundays) and the values recorded on the working days decreased.The quantile plots and histograms in Fig. 1b confirm that the data do not follow a normal distribution [6]. The non-linear character of logarithmic function changes is a limitation to the use of the parameter.This impedes both the determination of standard deviation or measurement uncertainty and the performance of a comparative analysis.It may also affect the results of statistical tests for normal distribution of the data [5].Therefore, the authors of this paper decided to determine the RMS sound pressure ( ) in the T period from equation ( 1) and use this parameter in further analysis. √ ( ) Standard uncertainty of the parameter, determined in the Type A evaluation, can be calculated from the following relationship: In this study, the authors analysed from (2) expressed in terms of Pa to be able to easily compare the fixed components (the expected value and the median) and variable components (standard deviation, coefficient of variation -classical and positional) of the sound pressure signals recorded.The tests for the variable components contained in the signals were based on the classical and positional measures: deviation ( ), coefficient of variation ( ), quartile deviation , quartile variation coefficient ( ), and quartile coefficient of dispersion ( ).Standard deviation is an absolute measure commonly used for the analysis of sound pressure variable component.Standard deviation is estimated from (4), where n is the amount of data: This parameter defines the average variation in individual sound pressure values from the arithmetic mean.Standard deviation can be related to the expected value of the signal being analysed to obtain the coefficient of variation (COV).The COV is a dimensionless relative measure that can be used to directly compare the variable components in its several realisations.For the sound pressure tested, the COV can be expressed as [7]: The value of the COV is greatly influenced by atypical data, taken into account in the analyses.This influence is less significant when positional measures are used.The measure of dispersion of the variable is the average quartile deviation: Quartile deviation is an absolute measure that defines the average variance of half of the measurement data around the median (after rejecting 25% data with the lowest values and 25% data of the highest values of sound pressure), expressed in terms of pascal.By relating it to the median, the positional coefficient of variation is calculated from (7): The quartile coefficient of dispersion is a relative measure of variance, calculated from (8): The positional coefficient of variation and the quartile coefficient of dispersion are positional measures of the data between the first and third quartiles.Thus, atypical data exert less influence on these coefficients.It has to be noted, however, that the data under analysis represent the measurements collected within 24-hour periods, thereby atypical data cannot be regarded as erroneous measurements. 
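The descriptors discussed above can be computed directly from a series of short-interval equivalent levels. The sketch below is illustrative and does not reproduce the exact estimator conventions of the paper: it assumes the textbook conversion p = p₀ · 10^(L/20) from an A-weighted equivalent level to RMS pressure, the usual sample standard deviation, the Type A standard uncertainty s/√n, and the standard definitions of the quartile-based (positional) measures; the example input values are hypothetical.

```python
import numpy as np

P0 = 20e-6  # reference sound pressure, Pa

def rms_pressure_from_leq(leq_db):
    """Convert A-weighted equivalent sound levels (dB) to RMS sound pressures (Pa)."""
    return P0 * 10.0 ** (np.asarray(leq_db, dtype=float) / 20.0)

def dispersion_measures(p):
    """Classical and positional measures of variability for a sample of RMS pressures."""
    p = np.asarray(p, dtype=float)
    mean = p.mean()
    s = p.std(ddof=1)                 # sample standard deviation
    u_a = s / np.sqrt(p.size)         # Type A standard uncertainty of the mean
    cov = s / mean                    # classical coefficient of variation
    q1, med, q3 = np.percentile(p, [25, 50, 75])
    q_dev = (q3 - q1) / 2.0           # quartile deviation
    v_q = q_dev / med                 # positional (quartile) coefficient of variation
    qcd = (q3 - q1) / (q3 + q1)       # quartile coefficient of dispersion
    return {"mean": mean, "s": s, "u_A": u_a, "COV": cov,
            "Q_dev": q_dev, "V_Q": v_q, "QCD": qcd}

# Hypothetical 24-h equivalent levels (dB) for one week, Monday to Sunday
leq_week = [68.2, 68.5, 68.1, 68.7, 69.0, 66.9, 66.5]
p_rms = rms_pressure_from_leq(leq_week)
for name, value in dispersion_measures(p_rms).items():
    print(f"{name:6s} = {value:.4g}")
```

The same functions can be applied separately to the day, evening and night sub-intervals, or to the full set of 24-hour values recorded for a given weekday across the measurement period.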
Table 1 summarizes the results of the data analysis for 2011, the RMS sound pressure values calculated according to relationship (2). The calculations show that the pressure increases on working days and decreases by 10 to 20 mPa on Saturdays and Sundays. Note the large changes in the standard deviation values, which range from 2 mPa to 8 mPa. The COV values range from 3% to 13%, and the highest value is obtained on Mondays. By contrast, the values of the positional coefficients are in the range from 2% to 5%, with the highest values recorded on Sundays. The measurement uncertainty is the lowest on Wednesdays (0.56 mPa), with 1.20 mPa on Sundays, which is a noticeably lower value than that recorded for Mondays (2.45 mPa). Analysis of the results in Tables 1 and 2 indicates that the renovation of Popiełuszki Avenue reduced the RMS sound pressure on working days. This value changed slightly on Saturdays, but on Sundays it increased by about 10%. The average value of the positional coefficient of variation increased from about 3% in 2011 to around 4% in 2016, while the COV did not change and remained at the level of about 8%. The lowest measurement uncertainty was 0.6 mPa in 2011 and 1.0 mPa in 2016. Both drawings show some similarity despite the fact that they represent different physical quantities. For pressures up to about 60 mPa, the dependencies are close to linear. For higher pressure values, the dependency is difficult to indicate. Note that both parameters, i.e., u_A and COV, depend on the standard deviation. The variability interval of the standard deviation in 2011 is significantly higher than in 2016, which results in a large spread in Fig. 5. The parameter values depend on the technical condition of the road and motor vehicles, which in 2011 was far worse than in 2016. As the use of roads causes their wear, the values of u_A, COV and the standard deviation will increase over time. This problem requires further analysis.

Conclusions

The data collected for the equivalent sound level, measured over the 24-hour period and generated by vehicles passing along Popiełuszki Avenue, Kielce between March and June in 2011, before road renovation, and in the corresponding period of 2016, after the renovation, showed that the values fluctuated slightly. Analysis of RMS sound pressure values revealed that, compared with 2011, the RMS sound pressure decreased on working days in 2016. There was a minor change on Saturdays, while on Sundays the values increased by about 10%. The average value of the positional coefficient of variation increased from about 3% in 2011 to about 4% in 2016, while the COV remained at the level of about 8%. The lowest measurement uncertainty was 0.6 mPa in 2011 and 1.0 mPa in 2016. The highest RMS sound pressure values were recorded on Fridays. The nature of the changes in the sound pressure coefficients varies with the day of the week and differs between the two years, but the values of the coefficients range from about 2% to 6%. The sound pressure measurement uncertainties in 2011 and in 2016 were different, but never exceeded 2.5 mPa, and on Saturdays and Sundays differed only slightly. Likewise, the COV values in 2011 and in 2016 were different, but never exceeded 12.5%, and on Saturdays and Sundays only minor differences were recorded.

Fig. 1. Equivalent sound level results from the terminal in Popiełuszki Avenue recorded in 2011 and 2016 for all 24-hour measurement periods: a) equivalent sound level variations and the corresponding b) quantile plots and histograms with the probability density function for the standardized data, c) box plots.

Fig. 2. Box plots of the RMS sound pressure for each 24-hour period of the week: a) in 2011, b) in 2016. The outliers visible in the graphs should be accounted for in the analysis because, in the authors' view, they do not represent measurement errors. Analysis of the measurement results database revealed the outliers on the following days of public holidays in 2011: Easter Monday - Monday, Constitution Day - Tuesday, Corpus Christi - Thursday, Easter Saturday - Saturday, and in 2016: Constitution Day - Tuesday, Corpus Christi - Thursday. The data confirmed a regularity between the box plots in Fig. 2a and in Fig. 2b, showing that the highest RMS sound pressure values occurred on Fridays. Figure 3 depicts the coefficient values split into days of the week in 2011 and 2016. The nature of these changes in each year is different, but the coefficients are in the range of about 2% to 6%. The character of the changes is very similar to that shown in Fig. 3.

Fig. 3. Relationship between the value of the coefficient and particular days of the week: a) in 2011, b) in 2016.

Fig. 4 and Fig. 5. Changes in the measurement uncertainty of sound pressure u_A and in the COV on particular days of the week. The measurement uncertainty values of sound pressure vary with the year but remain lower than 2.5 mPa, and the differences on Saturdays and Sundays are minor. As for the COV, its values in the two years are different, but remain lower than 12.5%, with minor differences on Saturdays and Sundays.

Table 1. Values of basic statistical measures of the RMS sound pressure determined for all 24-h periods of the week in 2011.

Table 2 compiles the analysis results for RMS sound pressure data collected in 2016. The calculations show that this value varies from about 60 mPa to 65 mPa on working days and decreases by 10 to 15 mPa on Saturdays and Sundays. Standard deviation values range from 4 mPa to 6 mPa. The COV is in the range 6% to 10%, with the highest value achieved on Tuesday. By contrast, the positional coefficient values range from 3% to 6%, with the highest value recorded on Saturdays. The measurement uncertainty is 1.35 mPa on Saturdays and 1.04 mPa (the lowest) on Fridays.

Table 2. Values of basic statistical measures of the RMS sound pressure determined for all 24-h periods of the week in 2016.
2018-12-18T09:59:58.865Z
2017-08-24T00:00:00.000
{ "year": 2018, "sha1": "67917cb25853221a999d0b428c3d48b306be845a", "oa_license": "CCBY", "oa_url": "https://www.matec-conferences.org/articles/matecconf/pdf/2018/16/matecconf_mms2018_02001.pdf", "oa_status": "GOLD", "pdf_src": "Anansi", "pdf_hash": "67917cb25853221a999d0b428c3d48b306be845a", "s2fieldsofstudy": [ "Environmental Science" ], "extfieldsofstudy": [ "Computer Science", "Geography" ] }
233311292
pes2o/s2orc
v3-fos-license
A Computational Evaluation of Two Models of Retrieval Processes in Sentence Processing in Aphasia

Can sentence comprehension impairments in aphasia be explained by difficulties arising from dependency completion processes in parsing? Two distinct models of dependency completion difficulty are investigated, the Lewis and Vasishth (2005) activation-based model and the direct-access model (DA; McElree, 2000). These models' predictive performance is compared using data from individuals with aphasia (IWAs) and control participants. The data are from a self-paced listening task involving subject and object relative clauses. The relative predictive performance of the models is evaluated using k-fold cross-validation. For both IWAs and controls, the activation-based model furnishes a somewhat better quantitative fit to the data than the DA. Model comparisons using Bayes factors show that, assuming an activation-based model, intermittent deficiencies may be the best explanation for the cause of impairments in IWAs, although slowed syntax and delayed lexical access may also play a role. This is the first computational evaluation of different models of dependency completion using data from impaired and unimpaired individuals. This evaluation develops a systematic approach that can be used to quantitatively compare the predictions of competing models of language processing.

Introduction

Understanding a sentence requires the comprehender to access lexical representations of words from memory, link these with upcoming words, build up the structure of the sentence, and compute the meaning of the sentence. Consider example (1):

(1) The lawyer who came to the office yesterday was looking for some documents.

To understand this sentence, the hearer needs to work out who came to the office, when, and why that happened, and who was looking for what. In the sentence processing literature, the process of linking up words that are linguistically related is known as dependency completion. In sentence (1), an example of a dependency is the one between lawyer and was looking. A widely held assumption in sentence processing research (Gibson, 2000; Just & Carpenter, 1992; Lewis, Vasishth, & Van Dyke, 2006) is that dependencies require access to the working memory system in order to work out the relationships between words. Several theories have been developed that spell out the mechanisms that may be involved in the resolution of long-distance dependencies (Gibson, 2000; Just & Carpenter, 1992; McElree, 2000; McElree, Foraker, & Dyer, 2003; Van Dyke & Lewis, 2003). Among these, one class of accounts is referred to as cue-based retrieval theory (Engelmann, Jäger, & Vasishth, 2019; Lewis et al., 2006; Vasishth, Nicenboim, Engelmann, & Burchert, 2019). One core assumption here is that words and phrases are stored in memory as a bundle of feature-value pairs. For example, the word lawyer is represented in memory as an attribute-value matrix (Pollard & Sag, 1994). Some of the relevant feature-value pairs are shown below:

[nominal: yes, animate: yes, subject: yes, singular: yes]

Cue-based retrieval theory assumes that dependencies are resolved via a content-addressable search in memory. For example, in sentence (1), to resolve the dependency between lawyer and was looking, the comprehender needs to retrieve a mental representation of the noun lawyer from memory.
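The feature-bundle representation and the content-addressable search described above can be made concrete with a small schematic example. The sketch below is not an implementation of either of the two models introduced later; it only illustrates cue matching against feature bundles. The feature names follow the example in the text, while the encoding of office and the simple match-counting score are hypothetical simplifications.

```python
# Items in memory as bundles of feature-value pairs (attribute-value matrices)
lawyer = {"nominal": "yes", "animate": "yes", "subject": "yes", "singular": "yes"}
office = {"nominal": "yes", "animate": "no",  "subject": "no",  "singular": "yes"}

# Retrieval cues assumed to be set when the dependency with "was looking" is completed
retrieval_cues = {"nominal": "yes", "animate": "yes", "subject": "yes", "singular": "yes"}

def match_score(item, cues):
    """Count how many retrieval cues are matched by an item's features."""
    return sum(item.get(feature) == value for feature, value in cues.items())

memory = {"lawyer": lawyer, "office": office}
scores = {name: match_score(item, retrieval_cues) for name, item in memory.items()}
print(scores)                         # {'lawyer': 4, 'office': 2}
print(max(scores, key=scores.get))    # 'lawyer': the best-matching item is retrieved
```

In both models discussed below, it is this kind of cue-feature match, rather than a serial search over positions, that determines which item is accessed.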
Upon encountering the words was looking, a retrieval is assumed to be triggered that seeks out a subject noun that has specific features such as [nominal: yes, animate: yes, subject: yes, singular: yes]. We will therefore refer to was looking as the retrieval site, that is, the point at which the retrieval of the co-dependent is triggered, and to the lawyer as the target of the retrieval. The features that are used to carry out the search and retrieval of a given codependent in memory are called retrieval cues. Notice that in this case, it is assumed that was looking is a multi-word unit encoded in memory with a single matrix of feature-value pairs. One reason that sentence comprehension difficulty arises is when multiple items in memory match the retrieval cues set by the trigger at the retrieval site. This is known as similarity-based interference (Van Dyke & Lewis, 2003;Van Dyke & McElree, 2006). To illustrate, we consider the object relative (OR) clause shown in (2b) along with the baseline condition, the subject relative (SR) clause (2a): (2) a. The man who scratched the boy pushed the girl. b. The man who the boy scratched pushed the girl. In (2b), the comprehender needs to work out who scratched whom, and who pushed whom. When the verb scratched is encountered, the retrieval of its corresponding subject (boy) is triggered, using retrieval cues such as [nominal: yes, animate: yes, subject: yes, singular: yes]. At the moment of retrieval, there are two nouns available in memory that match these retrieval cues: man and boy. However, man is not the subject of the relative clause (RC) verb scratched. Following the literature (Jäger, Engelmann, & Vasishth, 2017), we will refer to man as the distractor. Cue-based retrieval theory predicts that processing difficulty increases when both a target and a distractor have features that match the retrieval cues. Processing difficulty arises because these nouns become difficult to distinguish from each other; this phenomenon is called the fan effect in memory research in cognitive psychology (Anderson et al., 2004). In summary, in (2b) processing is assumed to be more difficult at the verb scratched compared to the baseline condition (2a), where the cues used to access the subject of scratched only match the subject man (Lewis & Vasishth, 2005). Grodner and Gibson (2005) present data from a self-paced reading study on ORs and SRs in English that is consistent with this prediction. The increased processing difficulty at the verb scratched is predicted to lead to longer reading times in (2b) versus (2a), and to occasional misretrievals of the incorrect noun from memory. This is the signature effect that is referred to as similarity-based interference (Gordon, Hendrick, Johnson, & Lee, 2006;Jäger et al., 2017;Jäger, Mertzen, Van Dyke, & Vasishth, 2020;Van Dyke & McElree, 2006Vasishth et al., 2019). Two distinct instantiations of cue-based retrieval theory are the Lewis and Vasishth (2005) model of sentence processing (henceforth, LV05) and the direct-access model (henceforth, DA) developed by McElree (2000). The two models share the assumption that retrieval is driven by a cue-based mechanism, and both predict that a distractor disrupts the retrieval of the target when the retrieval cues match the distractor and the target. Despite these similarities, the two models assume fundamentally different underlying processes for the access of representations in memory. 
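Before turning to the two models in detail, the interference contrast between (2a) and (2b) can be made concrete with a small numerical sketch. The sketch below uses the standard ACT-R-style spreading-activation and latency equations that LV05-type models build on, in which the associative strength of a cue decreases with its fan (the number of items in memory associated with that cue); the specific constants are illustrative assumptions, not fitted values.

```r
## Illustrative sketch of similarity-based interference (assumed constants).
## In ACT-R-style models, a cue's associative strength is S_max - log(fan),
## so activation drops when more items in memory match the same cue.
S_max <- 1.5   # maximum associative strength (illustrative)
W     <- 1     # total cue weighting, divided evenly among the cues
F_lat <- 0.2   # latency factor mapping activation to retrieval time (illustrative)

activation <- function(base, fan_per_cue) {
  n_cues <- length(fan_per_cue)
  base + sum((W / n_cues) * (S_max - log(fan_per_cue)))
}
retrieval_time <- function(A) F_lat * exp(-A)  # higher activation = faster retrieval

A_sr <- activation(base = 0, fan_per_cue = rep(1, 4))  # (2a): only "man" matches the cues
A_or <- activation(base = 0, fan_per_cue = rep(2, 4))  # (2b): "man" and "boy" both match

retrieval_time(A_sr)  # faster retrieval in the subject relative
retrieval_time(A_or)  # slower, less reliable retrieval under interference in the object relative
```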
In the LV05 model, retrieval time for an item depends on the activation of the item in memory, with reduced discriminability of an item leading to lower activation and therefore longer retrieval times. By contrast, in the DA model, retrieval time is assumed to be constant, and reduced discriminability only affects the probability of correct retrieval of the target. were the first to formally implement these two competing models and compare their relative predictive performance. Using self-paced reading data from a number interference experiment in German (Nicenboim, Vasishth, Engelmann, & Suckow, 2018), Nicenboim and Vasishth implemented the LV05 and DA models in a Bayesian framework. They showed that (a) the DA has better predictive performance than the activation-based model, but (b) the activation-based model yields a comparable performance to the DA when the variance of the retrieval times is allowed to be different for correct and incorrect retrievals. The computational implementations of the two competing models of retrieval make it possible, for the first time, to investigate their relative performance using a broader range of experimental data. Both LV05 and DA are meant to account for retrieval processes in sentence comprehension in unimpaired populations. An open question is whether these models, which have until now only been investigated in connection with unimpaired processing, can also characterize retrieval difficulty in impaired populations. That is, can the models account for impaired processing through parametric variation? And if they can, what do the changes in the parameters tell us about the impairments? In this paper, we focus on an important and under-studied problem, the underlying nature of retrieval difficulty in individuals with aphasia (IWAs). Aphasia is an acquired neurological condition caused by brain injury that affects language production and comprehension. One question we seek to answer is: Given the two competing models of retrieval processes, which one better characterizes processing difficulty in IWAs? As data, we use the largest dataset currently in existence on sentence comprehension in IWAs. This dataset, reported in Caplan, Michaud, and Hufford (2015), provides listening times (LTs) and picture-selection accuracies from IWAs and matched unimpaired controls. The full dataset involves a range of syntactic constructions and methods, but in this paper, we focus on self-paced listening data on the SR versus OR clause construction, which is a very well-studied construction in psycholinguistics. The present paper is structured as follows. We begin by reviewing prior work on modeling retrieval processes in aphasia. Next, we present the data, our implementation of LV05 and DA, the results of the model comparisons, and a Bayes factors (BFs) analysis. Modeling retrieval processes in aphasia There are several theories about why language processing deficits arise in IWAs. In this paper we focus on processing deficit theories that can be implemented within the framework of cue-based theory and that are of relevance for our modeling work. 1 In particular, we focus on the following accounts: delayed lexical access (Ferrill, Love, Walenski, & Shapiro, 2012), slow syntax (Burkhardt, Avrutin, Piñango, & Ruigendijk, 2008), resource reduction (Caplan, 2012), and intermittent deficiencies (Caplan et al., 2015). The delayed lexical access theory claims that lexical access is delayed in IWAs, and this can cause a slowdown in the formation of a syntactic dependency. 
Evidence supporting this theory comes from a series of cross-modal lexical priming studies, which combine a listening comprehension and a lexical decision task. Love, Swinney, Walenski, and Zurif (2008) and Ferrill et al. (2012) (inter alia) found that IWAs showed slower lexical activation relative to controls. Some cross-modal lexical priming studies have also revealed that IWAs build syntactic dependencies at a slower-than-normal speed. This has been taken as support for the slow syntax theory (Burkhardt et al., 2008;Burkhardt, Piñango, & Wong, 2003), which posits that a slowdown in syntactic structure building can cause a delayed interpretation or a failure to interpret the sentence. Under this account, the impairment is at the level of syntactic structure formation. Caplan, Waters, DeDe, Michaud, and Reddy (2007) and Caplan et al. (2015) present online and offline data that support the hypothesis that IWAs have a deficit in the resources used in parsing, what they refer to as resource reduction (Caplan, 2012). Complex sentences demand more resources, such as a higher memory load or attention, and therefore, IWAs are more likely to misinterpret complex sentences. Finally, Caplan, Michaud, and Hufford (2013) argue that in addition to a resource reduction, IWAs may exhibit intermittent breakdowns in the parsing system, a theory known as intermittent deficiencies. Some of these accounts have been implemented in the framework of LV05. Patil, Hanne, Burchert, De Bleser, and Vasishth (2016) developed several LV05-based models that implement theories of processing deficits in aphasia. They found that IWAs' processing was better characterized by a model that combined the implementation of slowed processing (understood as a "pathological slowdown in the processing system") and intermittent deficiencies, relative to models that included only one of these deficits. Building on the conclusions of Patil et al. (2016), Mätzig, Vasishth, Engelmann, Caplan, and Burchert (2018) investigated variability among IWAs by implementing slowed processing, intermittent deficiencies, and resource reduction within the LV05 model. The range of parameters estimated for IWAs showed a broad variability, whereas the parameters for control participants were closer to the default parameters of the original LV05 model and displayed a smaller range of variability. These results imply that IWAs are very variable in the extent and nature of their deficits along these three hypothesized dimensions (slowed processing, intermittent deficiencies, and resource reduction). The broader conclusion here is that deficits may lie on a continuum, and along different dimensions. Although Patil et al. (2016) only modeled data from seven IWAs, and Mätzig et al. (2018) modeled offline measures (accuracies), both studies showed that LV05 can account for IWAs' behavior by modifying specific parameters that can be mapped onto theoretically informed assumptions. By doing so, they derived quantitative predictions under the assumptions of theories of deficits in aphasia. However, whether the LV05 model can account for the different hypothesized deficits in a larger dataset with online measures remains to be tested. As discussed in the previous section, there exists another competing model of retrieval processes, the DA. The crucial difference between these two models is that they assume different underlying mechanisms for the access of items in memory. 
Yet the relative predictive performance of the activation model and of the DA has never been compared using data from both unimpaired and impaired populations. By comparing these two models' predictions with data from IWAs, we aim to investigate the following questions: (a) Can the direct-access mechanism of retrieval also account for sentence processing in IWAs? (b) How do the different parameters of these two models relate to theories of processing deficits in IWAs? (c) Which model provides a better fit to data from IWAs and controls? Investigating these questions would provide new insight into the nature of the dependency completion process in impaired and unimpaired populations. The Caplan et al. dataset makes such a model comparison possible. Below, we begin by revisiting the characteristics of the subset of the Caplan et al. dataset that we use in this paper. The Caplan et al. dataset: Self-paced listening times in relative clauses The empirical data we consider here consist of LTs and picture-selection accuracies from 33 IWAs and 46 controls matched by age and years of education. The original dataset reported in Caplan et al. (2015) included 56 IWAs, but we discarded data from eight IWAs because they were in the early post-acute phase (less than 4 months post-stroke), and from 15 other individuals who had been classified as IWAs but showed no symptoms of aphasia in the Boston Diagnostic Aphasia Exam (Goodglass, Kaplan, & Barresi, 2001). Out of the 11 sentence types in the dataset, we selected the SR and OR constructions (see examples 3a and 3b). This choice was motivated by the fact that RCs have been extensively studied in psycholinguistics, and a great deal is known about RC processing. In English and many other languages, ORs have been uniformly found to be more difficult to process than SRs (Grodner & Gibson, 2005). Moreover, IWAs are known to experience difficulties in the comprehension of OR clauses (Caramazza & Zurif, 1976;Hanne, Sekerina, Vasishth, Burchert, & De Bleser, 2011), especially when the thematic roles of the nouns can be reversed, as in the sentences shown below. (3) a. Subject Relative (SR): The girl who chased the mother hugged the boy. b. Object Relative (OR): The girl who the mother chased hugged the boy. In the experiment reported by Caplan et al. (2015), participants listened to sentences presented word by word, and pressed a computer key whenever they were ready to hear the next word. This yielded an online measure of comprehension: LTs per segment, in milliseconds. At the end of the sentence, participants had to choose which of two pictures displayed on the screen matched the meaning of the sentence they had just heard. This choice yielded accuracy data (correct/incorrect response). An example of the pictures shown in the picture-selection task is displayed in Fig. 1. These pictures correspond to the sentences (3a) and (3b). Of the 20 items corresponding to the SR and OR conditions in Caplan et al. (2015), we only used items 11-20 for our data analysis and modeling. The modeling is limited to these items because it was only in these items that the pictures in the picture-selection task tested the participant's understanding of the meaning of the verb inside the RC (e.g., who chased whom in 3a and 3b). For cue-based retrieval theory, in RCs, the retrieval of the agent of the action expressed by the verb within the RC is the first and key retrieval event (Lewis & Vasishth, 2005). 
In English, the verb of the subordinate clause (chased in 3a and 3b) does not appear in the same position in SR and OR clauses, and therefore the LTs corresponding to the verb region are not directly comparable. To make the two sentences comparable, we followed the procedure in Traxler, Williams, Blozis, and Morris (2005) and added up the LTs of the noun phrase ("the mother") and the verb ("chased") inside the SR/OR clause. Trials with LTs shorter than 200 ms were discarded (around 2% of the data).

In the following section we present descriptive statistics and a Bayesian analysis of the data used for modeling. We analyze the data using the Bayesian framework because this allows us to quantify uncertainty about the estimates of interest (e.g., the difference in LTs for SR and OR clauses). Our statistical inferences are based on 95% credible intervals and means of the estimates; the credible intervals show the range over which plausible values of the parameter lie with 95% probability, given the data and the model.

The mean accuracy for controls and IWAs across the two conditions is shown in Fig. 2. For controls, accuracy is above 90% in both conditions, whereas for IWAs accuracy in SRs is 75%, and 63% in ORs. Fig. 3 shows the mean LTs across conditions and groups. IWAs are slower than controls in both conditions. For both IWAs and controls, responses in the OR condition are slower relative to responses in the SR condition.

(Fig. 1 caption: Example of the images shown in the picture-selection task. In the subject relative condition, the picture on the right is the target, whereas the picture on the left is the foil. In the object relative condition, the picture on the left is the target, and the one on the right is the foil.)

We fit a Bayesian hierarchical model with a lognormal likelihood to the LTs and a Bayesian logistic mixed model to the accuracy data. The analyses were carried out with correct and incorrect trials pooled. We used R (R Core Team, 2020) and the package brms (Bürkner, 2017), which is a front-end for Stan (Carpenter et al., 2017). For both models, the factors group (controls/IWAs), condition (SR/OR), and their interaction were fit as fixed effects. These factors were sum-coded (Schad, Vasishth, Hohenstein, & Kliegl, 2020): SRs were coded as −1 and ORs as +1; controls as −1 and IWAs as +1. Random intercepts by subjects and items were included, a slope by item was added to the group effect, and a slope by subject was added to the effect of condition. The varying intercepts and slopes were allowed to be correlated. We used so-called regularizing priors, which allow a broad range of parameter values but disallow implausible (or impossible) values. The priors for the model of the accuracies, listed in Eq. 1, are on the logit scale, whereas the priors for the LTs model, listed in Eq. 2, are on the log scale. In the prior specification for the residual standard deviation (σ), the subscript + in the normal distribution prior stands for a normal distribution truncated at 0 (reflecting the fact that standard deviations can never be less than 0). For the correlation matrix of the random effects, we used the so-called LKJ prior (Lewandowski, Kurowicka, & Joe, 2009) with parameter 2; this parameter disfavors extreme correlations like ±1 (Carpenter et al., 2017). The models were fit with four chains and 2,000 iterations, of which 1,000 were warm-up iterations.

α ∼ Normal(7.5, 0.6)
β_{1,...,3} ∼ Normal(0, 0.5)
σ ∼ Normal_+(0, 0.5)    (2)

Fig.
4 shows the posterior distributions of the parameters of interest. In a Bayesian model, the posterior distribution indicates the most likely parameter values given the data and the model. We report the mean estimate for each effect of interest, as well as their corresponding 95% credible interval (CrI). This interval represents the range over which we are 95% certain that the effect lies, given the data and the model. In LTs, large effects for group and condition were found: ORs yield longer LTs (effect of condition: 323 ms CrI: [227,422]), and IWAs are slower than controls (effect of group: 647 ms CrI: [309,1003]). The interaction (−85 ms CrI: [−182, 9]) suggests that the effect of condition could be stronger for controls, but since the CrI overlaps with 0, strong conclusions cannot be drawn from this estimate. Having summarized the inferences that can be made from the data, we now turn to a description of the two models, and the models' evaluation and comparisons. The activation-based model In cognitive psychology, response selection in simple choices is often modeled using accumulation of evidence (Heathcote & Love, 2012;Ratcliff, 1978). Evidence accumulation models assume that when facing a speeded decision, people accumulate noisy samples of information about the different choices that are available, until they have enough evidence to choose one of them (Forstmann, Ratcliff, & Wagenmakers, 2016). Language processing can be seen as a similar process: When listening to a sentence, the comprehender samples evidence from the linguistic input that unfolds over time. Once the retrieval site is encountered, comprehenders have to retrieve an item from memory. argued that the retrieval process assumed in LV05 is conceptually similar to a race model (Rouder, Province, Morey, Gomez, & Heathcote, 2015;Usher & McClelland, 2001), in which each choice is represented with an accumulator of evidence. The speed of the process of sampling evidence in a race of accumulators can be equated to the activation in LV05: The item in memory with the faster rate of accumulation (equivalent to the higher activation in LV05) will be the item retrieved, and the rate of accumulation will determine the latency of the retrieval. In the Caplan et al. (2015) data, the LTs at the RC verb and the second noun phrase serve as a measure of the speed of accumulation of evidence for the retrieval. Because there are two possible interpretations (SR or OR clause), we assume that there are two accumulators racing against each other. For instance, consider again the OR clause (3b), repeated here for convenience as (4): (4) The girl who the mother chased hugged the boy. When the comprehender reaches the verb chased, they need to retrieve a subject that matches the verb. If the comprehender understands the sentence correctly, they should have retrieved mother as the subject of the verb. An alternative possibility is that they accidentally misretrieve girl as the subject of the verb. Under these assumptions, the model has two accumulators: One accumulates evidence for the retrieval of the target (which corresponds to the correct OR interpretation in this example), and the other one accumulates evidence for the retrieval of the distractor girl (which corresponds to the incorrect SR interpretation in this example). The accumulator that finishes faster represents the interpretation chosen. 
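As an illustration of this race idea, the following sketch simulates two lognormal accumulators on each trial; the parameter values are toy assumptions chosen only to be in the rough range of the finishing times reported later, not the fitted estimates.

```r
## Toy simulation of the lognormal race (assumed values, not fitted estimates).
set.seed(1)
simulate_trial <- function(mu_correct, mu_incorrect, sigma) {
  ft_correct   <- rlnorm(1, meanlog = mu_correct,   sdlog = sigma)
  ft_incorrect <- rlnorm(1, meanlog = mu_incorrect, sdlog = sigma)
  ## the faster accumulator wins: its finishing time is the predicted LT,
  ## and it determines whether the chosen interpretation is correct
  data.frame(LT = min(ft_correct, ft_incorrect),
             correct = ft_correct < ft_incorrect)
}

## well-separated accumulators (control-like) vs. overlapping, noisier
## accumulators (IWA-like)
ctrl <- do.call(rbind, replicate(2000, simulate_trial(7.1, 8.3, 0.5), simplify = FALSE))
iwa  <- do.call(rbind, replicate(2000, simulate_trial(8.0, 8.4, 0.9), simplify = FALSE))
mean(ctrl$correct)  # high accuracy when the correct accumulator is clearly faster
mean(iwa$correct)   # accuracy drops as the two finishing-time distributions overlap
```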
We also assume that, when selecting one of the pictures during the picture-selection task, participants are choosing the interpretation that corresponds to the chunk retrieved from memory at chased (i.e., mother or girl in 4). Implementation of the activation-based model Following , the activation-based model is implemented as a Bayesian lognormal race of accumulators. The Bayesian framework was chosen for two reasons. First, because modern probabilistic programming languages like Stan (Carpenter et al., 2017) make it possible to flexibly define any assumed generative process while including taking individual differences into account. Second, the Bayesian approach to parameter estimation allows the researcher to directly take the uncertainty of the estimates into account (Lee & Wagenmakers, 2014). The model was implemented in Stan. For each trial i, the finishing times FT for the interpretation of a sentence as SR or OR are sampled from two lognormal distributions with scale σ, see Eq. 3. 2 The noise component (σ) is assumed to be different for controls and IWAs. 3 The accumulator with the faster (i.e., lower) FT will represent the winning interpretation, and its sampled value will become the estimated LT for that particular trial i, as shown in Eq. 4. SR accumulator The complete hierarchical model for the two accumulators is presented in Eq. 5. The terms u and w are the by-participant and by-item adjustments to the fixed effects terms; these are the familiar varying intercepts and slopes in linear mixed models (Bates, Maechler, Bolker, & Walker, 2015). All the parameters (which, given the lognormal likelihood, are on the log scale) have regularizing priors, listed in Eq. 6. 4 In the specific context of psycholinguistics, prior specification in hierarchical models is discussed at length in Sorensen, Hohenstein, and Vasishth (2016), Nicenboim and Vasishth (2016), Vasishth, Nicenboim, Beckman, Li, and Kong (2018), and Schad, Betancourt, Betancourt, and Vasishth (2020). The level labeled group had contrast coding −1 for controls, and +1 for IWAs; and the level labeled relative clause type (rc type ) was coded such that SRs were represented as −1 and ORs as +1. SR accumulator The varying intercepts and slopes for subject, u ¼ hu α 1 , u α 2 , u β 3 , u β 4 i, come from a multivariate normal distribution with four dimensions, abbreviated as MVN 4 ; and the varying intercepts and slopes for items, w ¼ hw α 1 , w α 2 , w β 1 , w β 2 i, also come from a multivariate normal distribution with four dimensions, MVN 4 . In the equations below, 0 is a column vector of zeros with the four (participants) or four (items) dimensions. The ∑ are the variance-covariance matrices of the multivariate normal distributions. The fixed effects β have the following interpretations: • β 1 , β 3 , β 5 are the effects of group, RC type, and the group × RC type interaction, respectively, in the accumulator for the SR interpretation. • β 2 , β 4 , β 6 are the effects of group, RC type, and the group × RC type interaction, respectively, in the accumulator for the OR interpretation. • β 7 is the effect of group in the σ parameter. Of interest in this model are the distributions of finishing times in the SR and OR accumulators, in the SR and OR conditions, and in the different population groups (controls vs. IWAs). These are generated in milliseconds once the posterior distributions of all the parameters in the model are estimated. 
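For illustration, the short sketch below shows how a log-scale linear predictor with the sum coding described above would be turned into mean finishing times in milliseconds for one accumulator; the coefficient values are hypothetical placeholders, not the posterior means estimated by the model.

```r
## Hypothetical coefficient values on the log scale, for illustration only.
alpha <- 7.2   # intercept of one accumulator
b_grp <- 0.3   # effect of group (controls = -1, IWAs = +1)
b_rc  <- 0.1   # effect of relative clause type (SR = -1, OR = +1)
b_int <- 0.05  # group x RC type interaction

finishing_ms <- function(group, rc_type) {
  exp(alpha + b_grp * group + b_rc * rc_type + b_int * group * rc_type)
}

finishing_ms(group = -1, rc_type = -1)  # controls, SR condition
finishing_ms(group = +1, rc_type = +1)  # IWAs, OR condition
```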
The finishing times for each one of the accumulators in each condition and for each group are estimated taking into account the abovementioned terms β 1,. . .,7 and the adjustments by item and by participant listed in Eq. 5. Predictions In the activation-based model the parameter σ and the finishing times of the accumulators have a theoretically meaningful interpretation. We expect these parameters to show different patterns across groups. The different σ reflect the assumption that for IWAs, the rate of accumulation of evidence can be noisier. A larger estimated σ for IWAs would be consistent with the intermittent deficiencies theory (Caplan et al., 2007), which claims that there are intermittent breakdowns in the parsing system of IWAs. However, the effects of crucial interest are on the finishing times: When the mean finishing time of the incorrect interpretation is similar to the finishing time of the correct interpretation, misretrievals become more likely. We therefore expect that compared to controls, IWAs should have more similar mean finishing times in the two accumulators; controls should have a bigger difference between the mean finishing times of the two accumulators. We also expect both accumulators to be slower for IWAs than for controls because IWAs may need more time than controls to retrieve items from memory and to build the dependency. Such a slowdown could be due to a lexical access deficit (Love et al., 2008) and/ or to slow syntax (Burkhardt et al., 2008). The direct-access model The DA (McElree, 2000) assumes that items (i.e., traces of words or phrases, such as the girl) in memory are accessed via a content-based, direct-access mechanism. That is, the cues set at the retrieval site enable direct access to matching items in memory. The retrieval process is subject to interference and decay: Increasing distance between the target and the retrieval site, or competing items in the sentence can lower the quality of the representation of the target item in memory. In the DA, the probability of retaining a memory representation at the retrieval site is known as the availability of a given item. Crucially, proponents of the DA argue that interference and decay have an impact on the availability of items in memory, but not on retrieval latencies. That is, whereas the probability of retrieving an item decreases as a function of the complexity of a sentence, complexity does not affect retrieval times. The DA has been developed and tested within the speed-accuracy tradeoff paradigm (SAT) by McElree and colleagues (Martin & McElree, 2008McElree et al., 2003), inter alia. They consistently found that the asymptote of the SAT function (which assesses successful retrieval of the target and/or quality of the retrieved representation) decreased as a function of sentence complexity. By contrast, the intercept and the rate of the SAT function (which assess processing speed) did not show a significant effect of complexity. Based on these findings, McElree and colleagues argue that interference and/or decay affect the probability of retrieving the target, but not the retrieval speed. In addition, it is assumed that low availability can cause a failure in parsing or the retrieval of a distractor item. On some trials, this initial failure could be followed by a reanalysis process (Martin & McElree, 2008;McElree, 1993;McElree et al., 2003;Van Dyke & McElree, 2011). Implementation of the direct-access model We follow by implementing the DA as a two-component Bayesian mixture model. 
The key assumptions of the DA are thus that retrieval cues enable direct access to the item's memory representation at the retrieval site, and that the retrieval of an item takes an average time t da . Differences in availability can lead to an initial incorrect retrieval of the distractor item. McElree and colleagues assume that on a certain proportion of trials, after a failure in parsing, comprehenders could engage in a "costly reanalysis process" (Martin & McElree, 2008). We formalize this assumption with two main parameters: P b , which is the probability of backtracking (what McElree and colleagues call reanalysis), and δ, which is the extra time needed for backtracking. This extra time is independent of the retrieval time t da . Notice that these two parameters (P b and δ) are not part of the SAT paradigm, and they constitute an implementation of McElree and colleagues' assumption of reanalysis. The model is shown schematically in Fig. 5. The parameter θ is the probability of correctly retrieving an item on the first retrieval attempt. This probability is allowed to vary across conditions, as it is assumed by McElree et al. (2003) that sentence complexity can have an impact on the availability of the items, and therefore on their retrieval probability. If an initial misretrieval or failure in parsing occurs at the retrieval site, a backtracking process is initiated with probability P b that, by assumption, always results in correct retrieval of the target (McElree, 1993). There are four fixed-effects parameters that have to be estimated in this model. For the parameter θ we define varying intercepts by participants and by items, and varying slopes for the effect of RC type (by participants) and group type (by items). The parameter µ represents the estimated log mean LTs at the critical region. Since the DA assumes that the retrieval time of an item takes on average t da log ms and is not affected by sentence complexity, RC type was not included as a fixed effect for the parameter µ. However, we assume that IWAs, given their impairment, could have a higher µ compared to controls and therefore add a main effect of group. That is, we assume that IWAs may differ in the average time they need to process the critical region relative to controls. Notice that t da is a latent variable that is part of µ, since we cannot directly compute t da from the observed LTs. The probability of backtracking, P b is also not assumed to vary across conditions, and thus only has an adjustment for group and a varying intercept by-participants because we assume that IWAs could have a different P b relative to controls. The parameter δ is the cost of backtracking, that is, the time (in log ms) that the backtracking process takes, and has an adjustment for group. The standard deviation σ also has a main effect of group. As in the activation-based model, the terms u and w are the by-participant and by-item adjustments to the fixed effects terms. As with the activation-based model, all the parameters (which are on the logit scale for probabilities and on the log scale for LTs) have regularizing priors, listed in Eq. 11. The level group had contrast coding −1 for controls, and +1 for IWAs; and rc type was coded −1 for SR clauses and +1 for ORs. The complete hierarchical model for all the parameters is shown in Eqs. 9 and 10. The mixture process is shown in Eq. 9, and the parameters and their priors are defined in Eq. 10. 
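Before turning to the equations, the three retrieval paths sketched in Fig. 5 can be illustrated with a toy generative simulation; the parameter values below are assumptions for illustration, not the fitted estimates.

```r
## Toy generative sketch of the direct-access mixture (assumed values).
set.seed(1)
simulate_da_trial <- function(theta, P_b, mu, delta, sigma) {
  if (runif(1) < theta) {
    ## target retrieved directly at the retrieval site
    data.frame(LT = rlnorm(1, meanlog = mu, sdlog = sigma), correct = TRUE)
  } else if (runif(1) < P_b) {
    ## initial misretrieval followed by backtracking, which recovers the target
    data.frame(LT = rlnorm(1, meanlog = mu + delta, sdlog = sigma), correct = TRUE)
  } else {
    ## initial misretrieval and no backtracking: a comprehension error
    data.frame(LT = rlnorm(1, meanlog = mu, sdlog = sigma), correct = FALSE)
  }
}

trials <- do.call(rbind, replicate(2000,
  simulate_da_trial(theta = 0.6, P_b = 0.8, mu = 7.3, delta = 0.3, sigma = 0.5),
  simplify = FALSE))
tapply(trials$LT, trials$correct, mean)  # correct responses are slower on average,
                                         # because some of them include the backtracking cost
```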
In Eq. 9, the LT on a given trial is generated as a two-component lognormal mixture:

LT ∼ lognormal(μ, σ), if retrieval succeeds initially, with probability θ
LT ∼ lognormal(μ + δ, σ), if retrieval fails initially, with probability 1 − θ    (9)

In Eq. 10, the varying intercepts and slopes for subject, u = ⟨u_μ0, u_α, u_β2, u_γ⟩, come from a multivariate normal distribution with four dimensions, abbreviated as MVN_4; and the varying intercepts and slopes for items, w = ⟨w_μ0, w_α, w_β3⟩, come from an MVN_3 distribution. In the equations below, 0 is a column vector of zeros with the four (participants) or three (items) dimensions. The fixed effects β have the following interpretations:

• β1 is the effect of group on the average time needed to listen to the critical region.
• β2, β3, β4 are the effects of RC, group, and the group × RC interaction, respectively, on the probability of a first correct retrieval.
• β5 and β6 are the effect of group on the probability of backtracking and on the estimated backtracking time, respectively.
• β7 is the effect of group on σ.

Consider the three possible scenarios according to the DA, and their corresponding paths shown in Fig. 5.

Case (i): The target is retrieved through a direct-access mechanism based on the cues set at the retrieval site, with probability θ. In this case, LTs are assumed to be drawn from a lognormal distribution with mean µ and standard deviation σ: LT ∼ lognormal(μ, σ).

Case (ii): The distractor is initially retrieved, but backtracking leads to the target being retrieved, with probability (1 − θ) × P_b. Once θ (the probability of initial correct retrieval) has been estimated, (1 − θ) yields the probability of an initial incorrect retrieval. The probability of backtracking is assumed to be independent of θ. Thus, multiplying P_b with (1 − θ) yields the probability of correctly retrieving the target after an initial misretrieval and subsequent backtracking. In this case, the LTs are drawn from a lognormal distribution with mean µ + δ, where δ is the cost of backtracking, and standard deviation σ: LT ∼ lognormal(μ + δ, σ).

Case (iii): The distractor is initially retrieved and there is no backtracking, with probability (1 − θ) × (1 − P_b). In this case, we multiply the probability that the first retrieval is incorrect with the probability that there is no backtracking. Here, the LTs are drawn from a lognormal distribution with mean µ and standard deviation σ: LT ∼ lognormal(μ, σ), and a misretrieval is predicted.

Notice that incorrect answers without backtracking in case (iii) are expected to have similar LTs to correct answers without backtracking, case (i), whereas in case (ii), longer LTs should be observed due to the extra time needed for backtracking. As such, in this model, the distribution of LTs associated with correct responses is a mixture of initially retrieved targets (i), and initial misretrievals plus backtracking (ii).

Predictions

The parameters θ, µ, P_b, δ, and σ have a group adjustment because they are expected to differ between controls and IWAs. We present here a short theoretical explanation of the interpretation of these parameters. We expect a lower estimate of the probability of correct initial retrieval, θ, for IWAs, in OR clauses. This would be in line with resource reduction. Complex sentences are assumed to require more processing resources, because additional linguistic operations need to be carried out and more material has to be kept in working memory (Caplan, 2012). This suggests that IWAs should show a lower probability of initial correct retrieval in ORs relative to SRs.
The different µ for controls and IWAs reflect the assumption that IWAs may need more time for parsing. This assumption can be linked to slowed processing theories, which would explain the slowdown in terms of lexical access (Love et al., 2008) or syntactic processing (Burkhardt et al., 2008). We expect IWAs to have a lower probability of backtracking: If the model predicts IWAs to backtrack, but not as often as controls, this could also be in line with the resource reduction hypothesis (Caplan, 2012). In unimpaired sentence comprehension, the DA model assumes that backtracking is a mechanism used on a certain proportion of trials when the initial interpretation of the sentence fails. If IWAs show a lower probability of backtracking, this could mean that even though they can backtrack, they do not do it as often as controls because the mechanism is disrupted. Alternatively, the P b parameter could also be linked to intermittent deficiencies, because the process of backtracking could be intermittently disrupted. In addition, we expect the cost of backtracking, δ to be higher for IWAs. This would reflect delayed syntactic processing (Burkhardt et al., 2008). Finally, a larger σ would imply more noise in the retrieval mechanism for IWAs. This would be consistent with the intermittent deficiency hypothesis (Caplan et al., 2007) that postulates that IWAs suffer from intermittent reductions in the resources used in parsing. Results of the activation-based race model We used the rstan package (Stan Development Team, 2020) to fit the models, with three chains, 6,000 iterations, and a warm-up of 3,000. 5 The chains were plotted and visually inspected for convergence. An additional metric of convergence is the so-called Rhat statistic (the ratio of between-to-within chain variance); when the sampler has converged, the Rhat statistic is close to 1 (Gelman et al., 2014). We checked that Rhats were always near 1. Two tuning parameters, delta and the tree depth, 6 were adapted when necessary for achieving convergence. Following Gelman et al. (2014), we also made sure that the parameters of the model could be recovered using simulated data (see the online supplementary materials). The activation-based model assumes that for each trial, LTs are drawn from the two accumulators, and the accumulator with the fastest LT wins the race. The two distributions of finishing times (i.e., the finishing time of each one of the accumulators for each trial) can be plotted against each other, so as to assess the precise predictions of the model. For example, Fig. 6 shows the distribution of finishing times for the correct and the incorrect interpretation for each of the two groups, and across the two conditions. Fig. 6a,b displays the accumulators for controls, while 6c and 6d stand for IWAs' accumulators. Fig. 6a displays the distribution of finishing times associated with the accumulator for the correct interpretation (SR) in dark gray, and for the incorrect interpretation (here OR) in light gray, for controls. The distribution for SR is clearly faster: The mean of the finishing times for the SR accumulator is 1,204 ms, whereas the mean finishing time for the OR accumulator is around 4,000 ms. In Fig. 6b, finishing times for the correct interpretation (OR, in light gray) are faster on average (1,655 ms) than the finishing times for the incorrect interpretation (SR, in dark gray, 4,647 ms). Therefore, Fig. 
6a,b indicates that controls tend to choose the correct interpretation, since the distributions associated with the correct interpretations have faster finishing times. Fig. 6c shows that IWAs also tend to choose the right interpretation in SRs. The mean of the accumulator for SR in the SR condition is 2,694 ms, whereas the mean of the OR accumulator is 4,717 ms. However, Fig. 6d indicates that it is difficult for IWAs to differentiate between the two interpretations in the OR condition (6d), where the two distributions show greater overlap. On average, the accumulator for the correct interpretation is faster: The estimated mean for the OR accumulator in the OR condition is 3,573 ms, whereas the estimated mean for the SR accumulator in the OR condition is 4,553 ms. But the overlap between the two distributions shows that the accumulator for the incorrect interpretation is sometimes as fast as the one for the correct interpretation. Therefore, the model predicts a difficulty for IWAs in distinguishing between the correct and interpretation in ORs. Fig. 6 shows that the model exhibits the predicted patterns: The means for the finishing times across conditions are slower for IWAs than for controls. For IWAs, the mean finishing times of the accumulator in the OR condition are more similar than for controls. We also predicted IWAs to have a higher σ because we assumed that their rate of accumulation could be noisier, and the model estimates reflect this prediction, as displayed in Fig. 7. Posterior predictive checks In order to evaluate the performance of the model, we compared the empirical data against the posterior predictive distributions estimated by the model (Gelman et al., 2014), a procedure that is known as posterior predictive checks (PPCs). We present the PPCs graphically, with violin plots, where the dots represent the mean of the empirical data. This is a way to inspect whether the data could have been generated by the models: If the mean of the empirical data is predicted by the model, that is, if the dot lies within the violin plots, the model could have generated the data. If the model is unable to reflect the distribution of the data, that implies a bad fit. Fig. 8 shows the PPCs for the activation-based model in the picture-selection accuracies. In general, the activation-based model predicts the observed accuracies for both groups and conditions. Fig. 9 shows the PPCs corresponding to the LTs. The model can correctly estimate the LT distribution of the data across conditions and groups, although it tends to overestimate the LT for controls in incorrect responses. Results of the direct-access model The DA was fit with three chains and 7,000 iterations, and a warm-up of 3,500. The chains were visually inspected, and we verified that all the Rhats were close to 1. Delta and the tree depth parameters were adapted when necessary, and we made sure that the parameters of the model could be recovered using simulated data. The DA model has three critical parameters: the probability of initial correct retrieval, θ, the probability of backtracking if the initial retrieval is not correct, P b , and δ, which is the time taken for backtracking. We turn now to assess the posterior distributions for these parameters across groups and conditions. The posterior distribution of θ (Fig. 10a) indicates that in SRs, controls initially retrieve the target 83% of the time, whereas IWAs have a lower probability of initial correct retrieval, 69%. 
However, in ORs, the probability of initial correct retrieval is 41% for controls, and 53% for IWAs. We discuss this surprising outcome below. Regarding the probability of backtracking, the posterior distribution of the parameter P b (Fig. 10b) indicates that controls perform backtracking around 82% if they initially retrieve the distractor, whereas IWAs backtrack 21% of the time. Notice that the parameters θ (Fig. 10a) and P b (Fig. 10b) are interrelated, and they should be interpreted together. The interpretation of both parameters shows that: a. Controls initially carry out a retrieval that leads to the correct interpretation most of the time in SRs (83%), and 41% in ORs. If the first retrieval was incorrect, they backtrack and get the correct interpretation in 82% of the cases. b. IWAs are estimated to retrieve the correct interpretation without backtracking for SRs about 69% of the time and for ORs 53% of the time. However, IWAs backtrack only 21% after an incorrect first retrieval. Therefore, misretrievals are more likely for IWAs than controls, especially in ORs. Fig. 11 shows the estimated time needed for backtracking. The posterior of δ shows that backtracking takes less time for controls, with a mean centered around 546 ms. By contrast, IWAs' estimate for δ is higher, around 678 ms. We predicted IWAs to have a lower probability of backtracking relative to controls, and Fig. 11 shows that the model confirms our prediction. We also predicted controls to have higher values for µ and σ. The model estimates are in line with these predictions (see Fig. 12). However, the model's estimates contradict our prediction about θ: We had assumed that due to resource reductions, IWAs should have a lower probability of initial correct retrieval in ORs. This surprising outcome in the DA is an inherent shortcoming of the model, at least under the assumptions made here. We discuss alternative explanations in Section 9. Posterior predictive checks As with the activation-based model, we graphically compare the distribution of the empirical data with the estimated posteriors of the model. Fig. 13 shows the PPCs corresponding to the picture-selection accuracies. In general, the model correctly predicts the qualitative pattern of the observed accuracies. Fig. 14 shows that the model estimates the LTs across conditions and groups, but it tends to underestimate the LTs for incorrect responses, and overestimate the correct responses in the SRs condition for IWAs. Quantitative comparison of the activation model and the direct-access model Although PPCs offer a visual way to assess the descriptive adequacy of the models, a more quantitative way of model assessment is required, in order to measure which model fits the data better. We compared the predictive accuracy of the models using 10-fold cross-validation (Vehtari, Gelman, & Gabry, 2017). Cross-validation in the Bayesian framework allows for comparisons of models that assume different generative processes for the data, such as the two models in this study. A 10-fold cross-validation involves splitting the dataset into 10 subsets of balanced data (balanced here means that each participant contributes approximately the same amount of data). One of the subsets is held out, and the model is fit to the nine remaining subsets. The posterior distributions of the parameters of this model are used to compute predictive accuracy on the subset of heldout data. This procedure is then repeated 10 times, one for each subset of held-out data. 
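A sketch of this cross-validation scheme is given below. It assumes a data frame dat with one row per trial and a participant column subj, plus two hypothetical helpers, fit_model() and log_lik_heldout(), standing in for refitting the Stan model and extracting the pointwise log-likelihood of the held-out rows; the predictive-accuracy measure computed at the end is the elpd discussed in the next paragraph.

```r
## Sketch of 10-fold cross-validation with folds balanced over participants.
## `dat`, `fit_model()` and `log_lik_heldout()` are hypothetical placeholders.
library(loo)         # for kfold_split_stratified()
library(matrixStats) # for colLogSumExps()

K <- 10
folds <- kfold_split_stratified(K = K, x = dat$subj)

elpd_pointwise <- numeric(nrow(dat))
for (k in seq_len(K)) {
  held_out  <- which(folds == k)
  fit_k     <- fit_model(dat[-held_out, ])              # refit on the other nine folds
  log_lik_k <- log_lik_heldout(fit_k, dat[held_out, ])  # draws x held-out trials
  ## log pointwise predictive density of each held-out trial,
  ## averaging the likelihood over the posterior draws
  elpd_pointwise[held_out] <- colLogSumExps(log_lik_k) - log(nrow(log_lik_k))
}

elpd_hat <- sum(elpd_pointwise)                                # overall predictive accuracy
se_hat   <- sqrt(length(elpd_pointwise) * var(elpd_pointwise)) # its standard error
```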
The difference between predicted and observed held-out data points is used to compute a measure of predictive accuracy: the expected log point-wise predictive density, or d elpd. When comparing two models, the model with the higher d elpd value is the model that represents a better fit to the data. The standard deviation of the sampling distribution of d elpd diff , the difference in d elpd, can also be computed, and has the standard frequentist interpretation: d elpd diff AE 2 Â SE can be interpreted as a 95% confidence interval. . However, the relatively large standard error means that the difference in the predictive performance of the models is not decisive. Table 1 details the difference in d elpd by condition and group, and their corresponding SE. Although the activation-based model consistently shows an advantage across conditions and groups, the standard errors indicate that the differences are not decisive. In this section, the relative performance of the models was assessed. We turn now to assess the relative importance that the individual parameters within each model have, in terms of explaining the data from IWAs. Model evaluation using Bayes factors The estimates from the activation-based model and the DA show that IWAs behave differently from controls. As discussed in the previous sections, given our linking assumptions, the different parameter estimates for the two groups can tell us whether the deficits that we link to the different parameters can explain IWAs' data. For instance, the larger σ that IWAs have in both models (relative to controls) indicates that intermittent deficiencies may be one of the causes of IWAs' processing difficulties. One question that arises is, to which extent is there evidence that these deficits are playing a role in IWAs' sentence comprehension? By assumption, both models had group adjustments in all of the parameters. These adjustments reflect the difference between IWAs and controls. However, if the group adjustment of a given parameter does not improve the model fit (i.e., the model would perform better if no difference was assumed between IWAs and controls), this could mean that the processing deficit we are linking to this parameter may not be playing a role in impaired sentence comprehension. One way to assess whether the group adjustments improve the models' fit is to compute a series of BFs. The BF quantifies the evidence against or in favor of a null model (M0) that does not assume an effect of group (no β adjustment for the group factor), relative to a model that assumes a group effect (M1). The BF is a ratio of marginal likelihoods (as shown in Eq. 14), and it indicates how likely it is that the data have been generated by one model relative to the other one. In Eq. 14, the subscript in BF 10 stands for the order of the models: Evidence of M1 over M0. The interpretation of BF is done in terms of relative odds. For instance, a BF 10 of 5 means that the odds are 5:1 in favor of M1. A BF closer to 1 is inconclusive, whereas a BF 10 larger than 1 indicates evidence in favor of M1, and BF 10 below 1 indicates evidence in favor of M0. The BF has a continuous scale (meaning the higher the BF 10 , the stronger the evidence for M1). There is no specific cutoff for the interpretation of the strength of the evidence in favor of a model over the other one, but guidelines have been proposed (Jeffreys, 1939(Jeffreys, /1998. In general, a BF 10 larger than 100 is considered as strong evidence in favor of M1. 
Conversely, a BF 10 of 1/100 or smaller is considered as strong evidence in favor of M0. BF and cross-validation are two different ways to perform model comparisons. Crossvalidation is well suited for comparing models with different generative processes (such as the activation-based model vs. the DA), but cross-validation may be problematic with models that make very similar predictions. In this case, the estimated standard error might be biased (Sivula, Magnusson, & Vehtari, 2020). Since our model evaluation at the parameter level involves comparing nested models that are likely to make similar predictions, in this section we use BFs instead of cross-validation. In what follows we perform a BF analysis for each parameter of the two models that has an adjustment for the group factor. For instance, for the σ parameter in both models, the M0 (null model) and M1 would be as shown in Eq. 15. Because BF is known to be sensitive to the choice of priors (Rouder, Haaf, & Vandekerckhove, 2018), we ran M1 with three different standard deviations for the prior of the β of interest (the adjustment for group) in order to show how the BF changes as a function of the prior standard deviation. The prior was always centered at 0 and the standard deviations were 0.1, 0.3, and 0.5. In addition, we included the following constraints: i. For the parameter µ in both models, and δ in DA, the group β in M1 was constrained to be positive. These parameters reflect the mean LTs and the time needed for backtracking, respectively. Therefore, according to theory, due to slow syntax and/or delayed lexical access, IWAs should be slower than controls. Because the contrast coding is +1 for IWAs and −1 for controls, a positive β would indicate that controls are faster than IWAs, as shown in Eq. 16. LT ∼ lognormal μ þ δ, σ ð Þ: ii. Similarly, for the parameter σ in both models, the group β was also constrained to be positive, since according to intermittent deficiencies, IWAs should have more noise in the processing system. iii. We assumed that the probability of initial correct retrieval and the probability of backtracking could be linked to the resource reduction hypothesis. Therefore, IWAs should show a lower θ and P b estimate, and the group β was thus constrained to be negative. Since IWAs are contrast coded +1, a negative β would imply a lower estimated probability for IWAs. iv. In the activation-based model, a condition × group interaction is assumed on the µ parameter. The priors for the effect of this interaction should be vague because there is no prediction about the direction of the effect. One could assume that (a) IWAs are more affected by the condition manipulation than controls, or (b) IWAs are less affected by the condition manipulation than controls, because IWAs perform poorly in both conditions. Therefore, the β for the interaction did not have any constraint. And similarly, the β for the interaction in the θ parameter in DA was not constrained either. A summary of the models that were run and their corresponding prior SD is shown in Table 2. All the BF were computed using the bridgesampling R package (Gronau, Singmann, & Wagenmakers, 2017) after running the models for 40,000 iterations. In addition, some of the models were run three times in order to confirm that the number of iterations was high enough to produce stable BF. Notice that for all parameters, M0 is the model that has no adjustment for the group effect. 
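As an illustration of the procedure, a minimal sketch of one such comparison is given below, assuming stan_M0 and stan_M1 are compiled Stan programs without and with the group adjustment for the parameter of interest, and stan_data holds the data list (all hypothetical names).

```r
## Sketch of a single Bayes factor computation via bridge sampling
## (hypothetical model and data objects).
library(rstan)
library(bridgesampling)

fit_M1 <- sampling(stan_M1, data = stan_data, chains = 3, iter = 40000)
fit_M0 <- sampling(stan_M0, data = stan_data, chains = 3, iter = 40000)

## marginal likelihoods, then BF_10 = p(data | M1) / p(data | M0)
ml_M1 <- bridge_sampler(fit_M1)
ml_M0 <- bridge_sampler(fit_M0)
bayes_factor(ml_M1, ml_M0)   # values above 1 favor M1, values below 1 favor M0
```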
Three versions of M1 were run, each with a different prior SD for the group adjustment, as shown in Table 2. In the case of parameters with an interaction, nine versions of M1 were run, one for each possible combination of the prior SD of the two adjustments (the β for the group effect and the β for the interaction condition × group). Activation-based model In the activation-based model there are two µ parameters, one for each accumulator of evidence, µ SR and µ OR . M0 µ does not include any adjustment for the effect of group or the interaction group × condition, for any of the two accumulators. M1 µ includes an adjustment for the effect of group and another adjustment for the interaction, for both accumulators. This is shown in more detail in Eq. 17. The BF results are summarized in Table 2. The BFs for µ in the activation-based model are either inconclusive or yield anecdotal evidence in favor of the model that does not assume a difference between controls and IWAs (M0 µ ). 7 In contrast, the BF results for σ yield strong evidence in favor of M1 σ : The model with a group adjustment for σ provides a better fit. This suggests that the group adjustment in σ could be sufficient to explain the differences between the two groups. Given our linking assumption, this means that the activation-based model estimates intermittent deficiencies to be the main source of processing deficits in IWAs. ACT stands for the activation-based model, and DLA stands for delayed lexical access theory. The columns "Group SD" and "Inter. SD" show the different prior SD of the β adjustments to the effect of group and the interaction group × condition, respectively. In the "Group SD" column, a plus sign indicates that the β for the group adjustment was constrained to be positive, and a minus sign indicates that the β was constrained to be negative. No constraints were applied to the β of the interactions. The column "BF 10 " summarizes the range of BF results for the priors shown in the table Direct-access model In the DA, the θ parameter also has a β for the interaction group × condition in addition to the β for the group effect. For the BF analysis, the M0 θ does not have any of these β, whereas the M1 θ has both, as shown in Eq. 18. Nine versions of M1 θ models were run (see Table 2), such that all possible combinations of prior SD for both adjustments could be considered. All BF for θ yield strong evidence in favor of M1, the model that assumes that IWAs have a lower probability of initial correct retrieval relative to controls (due to resource reductions). Irrespective of the prior SD, the BFs for µ and σ yield strong evidence in favor of M1. The BF for P b yields anecdotal to strong evidence in favor of M1 depending on the priors. In general, all of these parameters benefit from a group adjustment. By contrast, the BF for δ yields some evidence in favor of M0, suggesting that the group adjustment is not needed. Recall that δ is the time needed for backtracking, and that estimated LTs for trials with backtracking are drawn from (µ + δ, σ). The BF for δ could indicate that the group β is redundant because µ and σ (with their corresponding group adjustments) already explain the differences between controls and IWAs. This means that IWAs may not have an impairment in the mechanism of backtracking. That is, IWAs perform backtracking less often than controls (as estimated in P b ), but when they do backtrack, the mechanism is not disrupted. 
These results suggest that the DA accounts for slow syntax and/or delayed lexical access in µ (mean LTs), but not in δ (time needed for backtracking). In conclusion, the BF analyses at the individual parameter level revealed that in the activation-based model, an increased noise value for IWAs can explain the processing differences between IWAs and controls, which speaks in favor of the intermittent deficiencies theory. The model could also be in line with slow syntax and/or delayed lexical access, but the BF for the parameter linked to these theories was inconclusive, so the role of these deficits in the activation-based model remains unclear. By contrast, the DA is in line with a mixture of slow syntax and/or delayed lexical access, resource reduction, and intermittent deficiencies. Discussion In this study we presented a Bayesian implementation of two models of cue-based retrieval: the activation-based model and the DA. We linked the parameters of these models to major theories of processing deficits in sentence comprehension in aphasia, namely slow syntax, delayed lexical access, resource reduction, and intermittent deficiencies. The predictive performance of the two models was assessed with 10-fold cross-validation, and the quantitative and qualitative predictions of the models concerning data from IWAs and controls have been discussed. A BF analysis was performed, in order to quantify the evidence that the models had with respect to the different processing deficits that were evaluated. In what follows we discuss some unexpected aspects of the DA, we compare our findings to prior computational modeling work in the field of aphasia, and we point out some limitations of the present work as well as future directions. Unexpected behavior of the direct-access model The DA estimates IWAs to have a higher probability of initial correct retrieval in ORs relative to controls, which is surprising, since ORs are generally more difficult to process for IWAs than for controls (Caramazza & Zurif, 1976). However, this prediction would be in line with studies showing that unimpaired controls have an agentfirst preference: Unimpaired controls tend to interpret the first NP of a clause as the agent, which clashes with the actual thematic relations in some constructions (Hanne, Burchert, De Bleser, & Vasishth, 2015;Mack, Wei, Gutierrez, & Thompson, 2016). For instance, in an eye-tracking experiment involving a sentence-picture matching task with active and passive sentences such as (5a) and (5b), Mack et al. (2016) found that unimpaired controls showed initial agent-first processing followed by a thematic reanalysis. That is, in passive sentences, controls tended to initially look at the image in which the first noun phrase was the agent. After hearing the region that contained the disambiguating morphological information (i.e., the verb: visiting/visited), controls started fixating the target picture. This implies that controls, after processing the morphological cues, had to reanalyze the initial agent-first interpretation. By contrast, in the study of Mack et al. (2016), IWAs did not show signs of agent-first processing: They looked at the target and distractor pictures equally prior to the arrival of the disambiguating information. Previous studies where controls showed an agent-first bias used eye-tracking and the visual world paradigm, but our modeling suggests that the agent-first bias could also be detected in a self-paced listening experiment. 
In our data, if unimpaired controls experienced an initial agent-first bias in ORs, they would initially parse the sentence as an SR. Consider sentence (6). Once they hear the disambiguating region (e.g., second noun phrase in sentence 6), they would have to backtrack on a high proportion of trials to end up with the right thematic interpretation. In this regard, the estimates for controls in ORs would be in line with an initial agent-first strategy. However, a replication of these estimates would be needed, ideally with visual-world eye-tracking data, as in Hanne et al. (2015) or Mack et al. (2016). (6) OR: The girl who the mother chased hugged the boy. Finally, a major issue for the DA model is the fact that the data show longer LTs for incorrect responses. This pattern contradicts the core assumptions of the model, because correct responses are expected, on average, to take longer due to the cost of backtracking. 8 Intuitively, IWAs' incorrect responses may be associated with longer LTs because after backtracking IWAs may not be able to retain the retrieved representation. The slow syntax and the resource reduction hypotheses would be compatible with this view. However, the data show that the incorrect responses of unimpaired controls are also associated with longer LTs relative to correct responses. Therefore, the assumption that backtracking leads to the retrieval of the target (McElree, 1993) seems incompatible with our data. Comparison with previous computational modeling work on aphasia Taken together, the higher d elpd value in favor of the activation-based model, plus the fact that the DA underestimates the LTs for incorrect responses in ORs, suggests that the activation-based model is better at characterizing the processing of RCs in IWAs and controls. The BF analysis for the activation-based model highlights the role that intermittent deficiencies may be playing in an activation-based mechanism of retrieval, but slow syntax and/or delayed lexical access should not be ruled out, since the BFs for the parameters associated with these theories were rather inconclusive. Our results are consistent with previous sentence processing modeling work on aphasia. For example, Patil et al. (2016) found that the LV05 model that included slowed processing (understood as a slowdown in the parsing mechanism) and intermittent deficiencies showed the best fit to data from IWAs, relative to models that included only one of these deficits. It is also possible that IWAs may exhibit different degrees of these deficits, as suggested by Mätzig et al. (2018), who modeled the accuracies of the Caplan et al. (2013) dataset estimating ACT-R parameters at the individual level. Interestingly, their modeling also revealed that intermittent deficiencies was the deficit that affected most of the IWAs. Out of the 56 IWAs, 53 showed a higher noise value (relative to the default noise value in ACT-R) in OR clauses. Unfortunately, we do not have enough LT data to get robust parameter estimates at the individual level, but our modeling suggests that on average, IWAs are more subject to intermittent deficiencies than to slow syntax and/or delayed lexical access. One caveat that applies to Patil et al. (2016), Mätzig et al. (2018), and our own work, is that the models cannot distinguish between slow syntax and delayed lexical access. In our implementation of the activation-based model and the DA, one possibility would be to include a shift parameter (Rouder, 2005) that accounts for lexical access, as implemented in . 
Ideally, this parameter should have a group adjustment (to assess whether there is a delay in lexical access in IWAs on average, taking the estimate for controls as reference), and an individual adjustment, to assess to which extent each individual is affected by this deficit. Unfortunately, such parameter could not be fit due to data sparsity. Another issue to consider is that our modeling is limited to sentence comprehension. There is important modeling work in the aphasia literature that focuses on lexical processing (Evans, Hula, & Starns, 2019;Mirman, Yee, Blumstein, & Magnuson, 2011), the interface between lexical access and word production (Dell, Lawler, Harris, & Gordon, 2004), and word production (Walker, Hickok, & Fridriksson, 2018), among others. Ideally, a model of impairments in IWAs should account for both aphasic comprehension and production, and disentangle the difficulties that arise from lexical and syntactic processes. However, as we show in this study, there is no single parameter that can account for aphasic impairments, and it is very unlikely that a computational model, even with a larger number of parameters, could account for all the particularities of aphasic performance, which is variable in nature. Nevertheless, we believe that more computational modeling is needed in the field of aphasia, in order to better understand the underlying nature of language impairments in IWAs. Computational models require researchers to formalize hypotheses and assumptions, which is essential for theory development (Guest & Martin, 2020). Some limitations of the present work and future directions An important limitation of the present work is that even though the Caplan et al. (2015) dataset on IWAs and age-matched controls is the largest currently in existence, the data are still relatively sparse compared to standard datasets used for similar model comparisons in psycholinguistics, both in terms of the number of items (10) and participants (33 IWAs and 46 controls). For example, compared the predictive performance of the activation-based model and DA from reading time data from some 180 participants. It would be useful to revisit these model comparisons with larger datasets in the future. Another important step will be to test the two models against new experimental designs and with different experimental paradigms. This would allow for a more comprehensive evaluation of the differences between the models, as well as an assessment of their predictive ability when modeling interference effects in different tasks, languages, and conditions. We are currently compiling a comprehensive database containing several tasks and conditions of data from IWAs and unimpaired controls in (Pregla, Lissón, Vasishth, Burchert, & Stadie, 2020). In future work, we intend to use this database to further evaluate the models discussed here. Conclusion We compared the predictive performance of two competing models of cue-based retrieval using data from IWAs and age-matched controls. We tested whether the two models-the activation-based model and the DA-could account for experimental data from both IWAs and controls. This is the first study where competing models of cuebased retrieval have been tested against data from impaired populations. We also investigated the relative importance of the various parameters in both models using BFs. 
The BF analyses show that in the activation-based model, intermittent deficiencies (Caplan et al., 2015) best explains the behavioral data from IWAs, although slow syntax (Burkhardt et al., 2008) and delayed lexical access (Ferrill et al., 2012) may also play a role. In the DA, the behavior of IWAs is best explained in terms of a combination of slow syntax, delayed lexical access, resource reduction (Caplan, 2012), and intermittent deficiencies. The model comparisons show that both models have a similar performance for outof-sample predictions (assessed with 10-fold cross-validation), with a slight advantage for the activation-based model. In closing, we have presented the first-ever computational evaluation of different models of dependency completion, using the largest available database from IWAs and unimpaired controls that currently exists. Our work lays out a systematic workflow that can be used to quantitatively compare the predictions of competing models of language processing. phrase has been retrieved (i.e., the first noun phrase is interpreted as the agent) versus the interpretation of a sentence as OR, where the second noun phrase is retrieved as the agent. 3. We also fit a model with different variances for correct and incorrect responses, as introduced in . However, the quantitative difference in predictive performance between the model with a single variance and the model with two variances was negligible. Both models show a comparable quantitative fit to the data. Here, we report the model with a single variance for correct and incorrect responses. 4. The prior distributions of the main parameters are plotted in the online supplementary materials. 5. The code for both the activation-based and the direct-access models is available at https://bit.ly/3lda7Qj. 6. The adjustment of these tuning parameters (adapt_delta, max_treedepth) leads to the whole posterior distribution of the parameters being correctly explored by the Hamiltonian Monte Carlo algorithm used in Stan. See the Stan manual or the short guide on warnings for more information (https://mc-stan.org/misc/warnings). 7. A series of tables and plots showing the BF as a function of the priors for all of the parameters in both models is available in the online supplementary materials. 8. Notice, however, that due to random noise the model estimates slower incorrect responses in some trials, as shown in the tails of the distribution for incorrect responses in Fig. 14.
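As a rough illustration of the point made in footnote 8, the following generative sketch simulates listening times under the direct-access-style parameterization described earlier (initial correct retrieval with probability θ; otherwise backtracking with probability P_b, which adds δ to the lognormal location). The parameter values are invented and are not the estimates reported in this work; the sketch only shows that correct responses are slower on average while individual incorrect trials can still be slow.

```python
import numpy as np

# Minimal generative sketch of the LT distributions implied by the
# direct-access parameterization discussed above. With probability theta the
# correct item is retrieved at once; otherwise a misretrieval occurs and is
# repaired by backtracking with probability p_b (adding delta to the lognormal
# location), else the response stays incorrect. All values are invented.

rng = np.random.default_rng(7)
n_trials = 20_000
theta, p_b = 0.7, 0.5              # prob. of initial correct retrieval / of backtracking
mu, delta, sigma = 6.6, 0.4, 0.5   # lognormal location, backtracking cost, scale

correct = np.empty(n_trials, dtype=bool)
lt = np.empty(n_trials)
for i in range(n_trials):
    if rng.random() < theta:                  # direct correct retrieval
        correct[i] = True
        lt[i] = rng.lognormal(mu, sigma)
    elif rng.random() < p_b:                  # misretrieval repaired by backtracking
        correct[i] = True
        lt[i] = rng.lognormal(mu + delta, sigma)
    else:                                     # misretrieval, no repair -> incorrect
        correct[i] = False
        lt[i] = rng.lognormal(mu, sigma)

print("mean LT, correct responses  :", round(lt[correct].mean(), 1))
print("mean LT, incorrect responses:", round(lt[~correct].mean(), 1))
# Correct responses are slower on average (they include backtracked trials),
# yet the two distributions overlap heavily, so some incorrect trials are
# still slow, which is the point made in footnote 8.
```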
2020-05-28T09:14:10.463Z
2020-05-22T00:00:00.000
{ "year": 2021, "sha1": "167dd3bfd872f9c950da1f856853b3874df443b4", "oa_license": "CCBY", "oa_url": "https://onlinelibrary.wiley.com/doi/pdfdirect/10.1111/cogs.12956", "oa_status": "HYBRID", "pdf_src": "Adhoc", "pdf_hash": "69d972ff6ba4c8bdb56281a941033fa3f0760a47", "s2fieldsofstudy": [ "Psychology" ], "extfieldsofstudy": [ "Computer Science", "Medicine" ] }
92429
pes2o/s2orc
v3-fos-license
BioEve Search: A Novel Framework to Facilitate Interactive Literature Search Background. Recent advances in computational and biological methods in last two decades have remarkably changed the scale of biomedical research and with it began the unprecedented growth in both the production of biomedical data and amount of published literature discussing it. An automated extraction system coupled with a cognitive search and navigation service over these document collections would not only save time and effort, but also pave the way to discover hitherto unknown information implicitly conveyed in the texts. Results. We developed a novel framework (named “BioEve”) that seamlessly integrates Faceted Search (Information Retrieval) with Information Extraction module to provide an interactive search experience for the researchers in life sciences. It enables guided step-by-step search query refinement, by suggesting concepts and entities (like genes, drugs, and diseases) to quickly filter and modify search direction, and thereby facilitating an enriched paradigm where user can discover related concepts and keywords to search while information seeking. Conclusions. The BioEve Search framework makes it easier to enable scalable interactive search over large collection of textual articles and to discover knowledge hidden in thousands of biomedical literature articles with ease. Background Human genome sequencing marked the beginning of the era of large-scale genomics and proteomics, leading to large quantities of information on sequences, genes, interactions, and their annotations. In the same way that the capability to analyze data increases, the output by high-throughput techniques generates more information available for testing hypotheses and stimulating novel ones. Many experimental findings are reported in the -omics literature, where researchers have access to more than 20 million publications, with up to 4,500 new ones per day, available through to the widely used PubMed citation index and Google Scholar. This vast increase in available information demands novel strategies to help researchers to keep up to date with recent developments, as ad hoc querying with Boolean queries is tedious and often misses important information. Even though PubMed provides an advanced keyword search and offers useful query expansion, it returns hundreds or thousands of articles as result; these are sorted by publication date, without providing much help in selecting or drilling down to those few articles that are most relevant regarding the user's actual question. As an example of both the amount of available information and the insufficiency of naïve keyword search, the name of the protein p53 occurs in 53,528 PubMed articles, and while a researcher interested specifically in its role in cancer and its interacting partners might try the search "p53 cancer interaction" to narrow down the results, this query still yields 1,777 publications, enough for months of full-time reading [1]. Nonetheless, PubMed is a very widely used free service and is providing an invaluable service to the researchers around the world. In March 2007, PubMed served 82 million (statistics of Medline searches: http://www.nlm.nih.gov/bsd/medline growth.html) query searches and the usage is ever increasing. A few commercial products are currently available that provide additional services, but they also rely on basic keyword search, with no real discovery or dynamic faceted search. 
Examples are OvidSP and Ingenuity Answers, both of which support bookmarking as one means of keeping track of visited citations. Research tools such as EBIMed (EBIMed: http://www.ebi.ac .uk/Rebholz-srv/ebimed/index.jsp) [2] and AliBaba (AliBaba: http://alibaba.informatik.hu-berlin.de) [3] provide additional cross-referencing of entities to databases such as UniProt or to the GeneOntology. They also try to identify relations between entities, such as protein-protein interactions, functional protein annotations, or gene-disease associations. Search tools should provide dedicated and intuitive strategies that help to find relevant literature, starting with initial keyword searches and drilling down results via overviews enriched with autogenerated suggestions to refine queries. One of the first steps in biomedical text mining is to recognize named entities occurring in a text, such as genes and diseases. Named entity recognition (NER) is helpful to identify relevant documents, index a document collection, and facilitate information retrieval (IR) and semantic searches [4]. A step on top of NER is to normalize each entity to a base form (also called grounding and identification); the base form often is an identifier from an existing, relevant database; for instance, protein names could be mapped to UniProt IDs [5,6]. Entity normalization (EN) is required to get rid of ambiguities such as homonyms, and map synonyms to one and the same concept. This further alleviates the tasks of indexing, IR, and search. Once named entities have been identified, systems aim to extract relationships between them from textual evidences; in the biomedical domain, these include gene-disease associations and proteinprotein interactions. Such relations can then be made available for subsequent search in relational databases or used for constructing particular pathways and entire networks [7]. Information extraction (IE) [8][9][10][11] is the extraction of salient facts about prespecified types of events, entities [12], or relationships from free text. Information extraction from free text utilizes shallow-parsing techniques [13], part-ofspeech tagging [14], noun and verb phrase chunking [15], predicate-subject and object relationships [13], and learned [8,16,17] or hand-build patterns [18] to automate the creation of specialized databases. Manual pattern engineering approaches employ shallow parsing with patterns to extract the interactions. In the system presented in [19], sentences are first tagged using a dictionary-based protein name identifier and then processed by a module which extracts interactions directly from complex and compound sentences using regular expressions based on part of speech tags. IE systems look for entities, relationships among those entities, or other specific facts within text documents. The success of information extraction depends on the performance of the various subtasks involved. The Suiseki system of Blaschke et al. [20] also uses regular expressions, with probabilities that reflect the experimental accuracy of each pattern to extract interactions into predefined frame structures. Genies [21] utilizes a grammar-based natural language processing (NLP) engine for information extraction. Recently, it has been extended as GeneWays [22], which also provides a Web interface that allows users to search and submit papers of interest for analysis. The BioRAT system [23] uses manually engineered templates that combine lexical and semantic information to identify protein interactions. 
The GeneScene system [24] extracts interactions using frequent preposition-based templates. In recent years, the focus has been on the extraction of protein-protein interactions in general, recently including extraction from full text articles, relevance ranking of extracted information, and other related aspects (see, for instance, the BioCreative community challenge [25]). The BioNLP'09 Shared Task concentrated on recognition of more fine-grained molecular events involving proteins and genes [26]. Both papers give overviews of the specific tasks and reference articles by participants. One of the first efforts to extract information on biomolecular events was proposed by Yakushiji et al. [27]. They implemented an argument structure extractor based on full sentence parses. Each verb in a list of target verbs has specific argument structures assigned to it. Frame-based extraction then searches for fillers of each required slot according to the particular argument structure. On a small in-house corpus, they found that 75% of the errors could be attributed to erroneous parsing and another 7% to insufficient memory; both causes might have less impact on recent systems due to more accurate parsers and larger memory. Ding et al. [28] studied the extraction of protein-protein interactions using the Link Grammar parser. After some manual sentence simplification to increase parsing efficiency, their system assumed an interaction whenever two proteins were connected via a link path; an adjustable threshold allowed overly long paths to be cut off. As they used the original version of Link Grammar, Ding et al. [28] argued that adaptations to the biomedical domain would enhance the performance. An information extraction application analyzes texts and presents only the specific information from them that the user is interested in [29]. IE systems are knowledge-intensive to build and are, to varying degrees, tied to particular domains and scenarios such as the target schema. Almost all IE applications start with a fixed target schema as a goal and are tuned to extract information from unstructured text that will fit the schema. In scenarios where the target schema is unknown, open information extraction systems [30] like KnowItNow [31] and TextRunner [32] allow rules to be defined easily based on the extraction need. A hybrid application (IR + IE) that leverages the best of information retrieval (the ability to retrieve relevant texts) and information extraction (the ability to analyze text and present only the specific information the user is interested in) would be ideal in cases where the target extraction schema is unknown. An iterative loop of IR and IE with user feedback would potentially be useful. For this application, the main components of an IE system (such as a part-of-speech tagger, named entity taggers, and shallow parsers) need to preprocess the text before it is indexed by a custom-built augmented index that helps answer queries of the type "Cities such as ProperNoun(Head(NounPhrase))." Cafarella and Etzioni [33] have done work in this direction to build a search engine for natural language and information extraction applications. Exploratory search [34] is a topic that has grown from the fields of information retrieval and information seeking but has become more concerned with alternatives to the kind of search that has received the majority of focus (returning the most relevant documents to a Google-like keyword search). The research is motivated by questions like "what if the user does not know which keywords to use?"
or "what if the user is not looking for a single answer?". Consequently, research began to focus on defining the broader set of information behaviors in order to learn about situations when a user is-or feels-limited by having only the ability to perform a keyword search (source: http://en.wikipedia.org/wiki/Exploratory search). Exploratory search can be defined as specialization of information exploration which represents the activities carried out by searchers who are either [35]: (1) unfamiliar with the domain of their goal (i.e., need to learn about the topic in order to understand how to achieve their goal); (2) unsure about the ways to achieve their goals (either the technology or the process); or even (3) unsure about their goals in the first place. A faceted search system (or parametric search system) presents users with key value metadata that is used for query refinement [36]. By using facets (which are metadata or class labels for entities such as genes or diseases), users can easily combine the hierarchies in various ways to refine and drill down the results for a given query; they do not have to learn custom query syntax or to restart their search from scratch after each refinement. Studies have shown that users prefer faceted search interfaces because of their intuitiveness and ease of use [37]. Hearst [38] shares her experience, best practices, and design guidelines for faceted search interfaces, focusing on supporting flexible navigation, seamless integration with directed search, fluid alternation between refining and expanding, avoidance of empty results sets, and most importantly making users at ease by retaining a feeling of control and understanding of the entire search and navigation process. To improve web search for queries containing named entities [39], automatically identify the subject classes to which a named entity might refer to and select a set of appropriate facets for denoting the query. Faceted search interfaces have made online shopping experiences richer and increased the accessibility of products by allowing users to search with general keywords and browse and refine the results until the desired sub-set is obtained (SIGIR'2006 Workshop on Faceted Search (CFP): http://sites.google.com/site/facetedsearch/). Faceted navigation delivers an experience of progressive query refinement or elaboration. Furthermore, it allows users to see the impact of each incremental choice in one facet on the choices in other facets. Faceted search combines faceted navigation with text search, allowing users to access (semi) structured content, thereby providing support for discovery and exploratory search, areas where conventional search falls short [40]. Approach In an age of ever increasing published research documents (available in search-able textual form) containing amounts of valuable information and knowledge that are vital to further research and understanding, it becomes imperative to build tools and systems that enable easier and quick access to right information the user is seeking for, and this has already become an information overload problem in different domains. Information Extraction (IE) systems provide an structured output by extracting nuggets of information from these text document collections, for a defined schema. The output schema can vary from simple pairwise relations to a complex, nested multiple events. 
Faceted search and navigation is an efficient way to browse and search over a structured data/document collection, where the user is concerned about the completeness of the search, not just top ranked results. Faceted search system needs structured input documents, and IE systems extract structured information from text documents. By combining these two paradigms, we are able to provide faceted search and navigation over unstructured text documents, and, with this fusion, we are also able to leverage real utility of information extraction, that is, finding hidden relationships as the user goes through a search process, and to help refine the query to more satisfying and relevant level, all while keeping user feel incontrol of the whole search process. We developed BioEve Search (http://www.bioeve.org/) framework to provide fast and scalable search service, where users can quickly refine their queries and drill down to the articles they are looking for in a matter of seconds, corresponding to a few number of clicks. The system helps identify hidden relationships between entities (like drugs, diseases, and genes), by highlighting them using a tag cloud to give a quick visualization for efficient navigation. In order to have sufficient abstraction between various modules (and technologies used) in this system, we have divided this framework into four different layers (refer to Figure 1) and they are (a) Data Store layer, (b) Information Extraction layer, (c) Faceting layer, and (d) Web Interface layer. Next sections explain each layer of this framework in more details. 2.1. Data Store Layer. The Data Store layer preprocesses and stores the documents in an indexed data store to make them efficiently accessible to the modules of upper layer (information extraction layer). Format conversion is needed sometimes (from ASCII to UTF-8 or vice versa), or XML documents need to be converted to text documents before being passed to next module. After the documents are in the required format and cleansed, they are stored in a indexed data store for efficient and fast access to either individual documents or the whole collections. The data store can be implemented using an Indexer service like (Apache Lucene (Lucene: http://lucene.apache.org/) or any database like MySQL). The Medline dataset is available as zipped XML files that needed XML2 text conversion, after which we could ingest them into an indexer, Apache Lucene in our case. Such an indexer allows for faster access and keyword-based text search to select a particular subset of abstracts for further processing. ). The aim of marking such interesting phrases is to avoid looking at the entire text to find participants, as deep parsing of sentences can be a computationally expensive process, especially for the large volumes of text. We intend to mark phrases in biomedical text, which could contain a potential event, to serve as a starting point for extraction of event participants. Section 6.1 gives more details about our experimentations with classification and annotation of biomedical entities. All the classification and annotation were done offline before the annotated articles are indexed for the search as once an article is classified and annotated with different entity types, it does not need to be processed again for each search query. This step can be done preindexing and as a batch process. Shared Schema between IE Layer and Faceting Layer. 
In order to facilitate indexing and faceting over the extracted semi-structured text articles, both the IE layer and the faceting layer need to share a common schema. A sample of the shared schema used to enable interaction between these layers is shown in Scheme 1. The web interface provides the following features for interactive search and navigation. The interface presents a number of entity types (on the left panel) along with their specific instances/values, drawn from the previous search results and the current query. Users can choose any of the highlighted values of these entity types to interactively refine the query (adding new values or removing any value from the list with just one click) and thereby drill down to the relevant articles quickly without actually reading the entire abstracts. Users can easily remove any of the previous search terms, thus widening the current search. We implemented the BioEve user interface using AJAX (AJAX: http://evolvingweb.github.com/ajaxsolr/), Javascript, and JSON to provide a rich, dynamic experience. The web interface runs on an Apache Tomcat server. The next section explains the navigation aspects of the user interface. User Interface: A Navigation Guide. The search interface is divided into left and right panels, see Figure 2, displaying enriched keywords and results, respectively. Left panel: it offers suggestions and insights (based on co-occurrence frequency with the query terms) for different entity types, such as genes, diseases, and drugs/chemicals. (i) The left panel shows navigation/refinement categories (genes, diseases, and drugs); users can click on any of the entity names (in light blue) to refine the search. By clicking on an entity, the user adds that entity to the search, and the results on the right panel are refreshed on the fly to reflect the refined results. (ii) Users can add or remove any number of refinements to the current search query until they reach the desired result set (shown in the right panel). Right panel: it shows the user's current search results and is automatically refreshed based on the user's refinement and navigation choices in the left panel. (i) The top of the panel shows the user's current query terms and navigation so far. Here, users can also deselect any of the previously selected entities, or even all of them with a single click on "remove all." By deselecting entities, the user is essentially expanding the search, and the results in the right panel are refreshed on the fly for the remaining query entities to offer a dynamic navigation experience. (ii) Abstract results on this panel show the title of the abstract (in light red) and the full abstract text (in black, if the abstract text is available). (iii) Below the full abstract text, the list of entities mentioned in that abstract is shown (in light blue). These entity names are clickable and will start a new search for that entity name with a single click. (iv) A direct URL is also provided to the abstract page on http://pubmed.gov in case the user wants to access additional information such as authors, publication type, or links to a full-text article. Interactive Search and Navigation: A Walkthrough and behind the Scenes. Let us start an example search process, say with the query "cholesterol"; the paragraphs titled "behind-the-scenes" give details of the computational process behind each action. (1) The autocomplete feature helps complete the name while typing if the word has previously been mentioned in the literature, which is the case here with "cholesterol."
Behind-the-scenes: as the user starts typing, the query is tokenized (in the case of multiple words) and a search is made to retrieve word matches (not the result rows yet) whose beginnings match the characters the user has already typed, and this loop continues. The technologies at play are jQuery, AJAX, and the faceting feature of Apache Solr. Once the query is submitted by the user, the result rows also contain the annotated entity names, and these are used to generate tag clouds, using the faceting classification entity frequency count. The search results in 27,177 article hits (Figure 3). Those are a lot of articles to read. How about narrowing down these results with some insights given by BioEve Search? (2) In the left panel, "hepatic lipase" is highlighted; let us click on it, as it suggests an important relationship between "cholesterol" and "hepatic lipase." The search results are now narrowed down from 27,177 to 195 articles (Figure 4). That is still a lot of articles to read this afternoon; how about some insights on diseases? Behind-the-scenes: once the user clicks on a highlighted entity name in the tag cloud, this term (gene: "hepatic lipase") is added to the search filter, and the whole search process and tag cloud generation are repeated for the new query. You can see the disease "hyperthyroidism" highlighted in Figure 5. (3) Selecting "hyperthyroidism" drills the results down to 3, as can be seen in Figure 6. The top result is about "Treatment of hyperthyroidism: effects on hepatic lipase, lipoprotein lipase, LCAT and plasma lipoproteins". With a few clicks, the user can refine the search results to more relevant articles. Initial User Reviews and Feedback. We asked three life science researchers to review and provide feedback on the ease of search and novelty of the system; shown below is their feedback (paraphrased). Their names and other details are removed for privacy purposes. Information Extraction: Annotating Sentences with Biomolecular Event Types. The first step towards bioevent extraction is to identify phrases in biomedical text which indicate the presence of an event. The aim of marking such interesting phrases is to avoid looking at the entire text to find participants. We intend to mark phrases in biomedical text which could contain a potential event, to serve as a starting point for extraction of event participants. We experimented with well-known classification approaches, from a naïve Bayes classifier to the more sophisticated machine classification algorithms Expectation Maximization, Maximum Entropy, and Conditional Random Fields. An overview of the different classifiers applied at different levels of granularity, and of the features used by these classifiers, is shown in Table 1. For the naïve Bayes classifier implementation, we utilized the WEKA (WEKA: http://www.cs.waikato.ac.nz/ml/weka/) library, a collection of machine learning algorithms for data mining tasks, for the single-label-per-sentence approach. WEKA does not support multiple labels for the same instance. Hence, we had to accept a tradeoff here by including only the first encountered label in the case where the instance had multiple labels. For the Expectation Maximization (EM) and Maximum Entropy (MaxEnt) algorithms, we used classification algorithms from the MALLET library (MALLET: http://mallet.cs.umass.edu/index.php). Biomedical abstracts are split into sentences. For training purposes, plain text sentences are transformed into training instances as required by MALLET. Feature Selection for Naïve Bayes, EM, and MaxEnt Classifiers.
For the feature sets mentioned below, we used the TF-IDF representation. Each vector was normalized based on vector length. Also, to avoid variations, words/phrases were converted to lowercase. Based on WEKA library token delimiters, features were filtered to include those which had an alphabet as a prefix, using regular expressions. For example, features like −300 bp were filtered out, but features like p55, which is a protein name, were retained. We experimented with the list of features described below, to understand how well each feature suits the corpus under consideration. (i) Bag-of-words model: this model classified sentences based on word distribution. (ii) Bag-of-words with gene names boosted: the idea was to give more importance to words, which clearly demarcate event types. To start with, we included gene names provided in the training data. Next, we used the ABNER (ABNER: http://pages.cs.wisc.edu/ ∼bsettles/abner/), a gene name tagger, to tag gene names, apart from the ones already provided to us. We boosted weights for renamed feature "protein", by 2.0. (iii) Bag-of-words with event trigger words boosted: we separately tried boosting event trigger words. The list of trigger words was obtained from training data. This list was cleaned to remove stop words. Trigger words were ordered in terms of their frequency of occurrence with respect to an event type, to capture trigger words which are most discriminative. (iv) Bag-of-words with gene names and event trigger words boosted: the final approach was to boost both gene names and trigger words together. Theoretically, this approach was expected to do better than previous two feature sets discussed. Combination of discriminative approach of trigger words and gene name boosting was expected to train the classifier better. Evaluation of Sentence Level Classification Using Naïve Bayes Classifier. This approach assigns a single label to each sentence. For evaluation purposes, the classifier is tested against GENIA development data. For every sentence, evaluator process checks if the event type predicted is the most likely event in that sentence. In case a sentence has more than one event with equal occurrence frequency, classifier predicted label is compared with all these candidate event types. The intent of this approach was to just understand the features suitable for this corpus. Classifier evaluated was NaiveBayesMultinomial classifier from Weka (http://www.cs .waikato.ac.nz/ml/weka/) library, which is a collection of machine learning algorithms for data mining tasks. Table 2 shows precision results for NBC classifier with different feature sets for single label per sentence classification. Conditional Random Fields Based Classifier. Conditional Random fields (CRFs) are undirected statistical graphical models, a special case of which is a linear chain that corresponds to a conditionally trained finite-state machine [41]. CRFs in particular have been shown to be useful in parts-of-speech tagging [42] and shallow parsing [42]. We customized ABNER which is based on MALLET, to suit our needs. ABNER employs a set of orthographic and semantic features. Feature Selection for CRF Classifier. The default model included the training vocabulary (provided as part of the BIONLP-NLPBA 2004 shared task) in the form of 17 orthographic features based on regular expressions [41]. 
These include upper case letters (initial upper case letter, all upper case letters, mix of upper and lower case letters), digits (special expressions for single and double digits, natural numbers, and real numbers), hyphen (special expressions for hyphens appearing at the beginning and end of a phrase), other punctuation marks, Roman and Greek words, and 3-gram and 4-gram suffixes and prefixes. ABNER uses semantic features that are provided in the form of handprepared (Greek letters, amino acids, chemical elements, known viruses, abbreviations of all these) and databasereferenced lexicons (genes, chromosome locations, proteins, and cell lines). Evaluation of Sentence Classification Approaches. The framework is designed for large-scale extraction of molecular events from biomedical texts. To assess its performance, we evaluated the underlying components on the GENIA event dataset made available as part of BioNLP'09 Shared Task [26]. This data consists of three different sets: the training set consists of 800 PubMed abstracts (with 7,499 sentences), the development set has 150 abstracts (1,450 sentences), and the test set has 260 abstracts (2,447 sentences). We used the development set for parameter optimization and fine tuning and evaluated the final system on the test set. Employed classifiers were evaluated based on precision and recall. Precision indicates the correctness of the system, by measuring number of samples correctly classified in comparison to the total number of classified sentences. Recall indicates the completeness of the system, by calculating the number of results which actually belong to the expected set of results. Sentence level single label classification and sentence level multilabel classification approaches were evaluated based on how well the classifier labels a given sentence from a test set with one of the nine class labels. Phrase level classification using CRF model was evaluated based on how well the model tags trigger phrases. Evaluating this approach involved measuring the extent to which the model identifies that a phrase is a trigger phrase and how well it classifies a tagged trigger phrase under one of the nine predefined event types. Retrieved trigger phrases refer to the ones which are identified and classified by the CRF sequence tagger. Relevant trigger phrases are the ones which are expected to be tagged by the model. Retrieved and relevant trigger words refer to the tags which are expected to be classified and which are actually classified by the CRF model. All the classifiers are trained using BioNLP shared task training data and tested against BioNLP shared task development abstracts. We compare the above three approaches for classification in Table 3. CRF has a good tradeoff as compared to Maximum Entropy classifier results. As compared to multiple labels, sentence level classifiers, it performs better in terms of having a considerably good accuracy for most of the event types with a good recall. It not only predicts the event types present in the sentence, but also localizes the trigger phrases. There are some entries where ME seems to perform better than CRF; for example, in case of positive regulation, where the precision is as high as 75%. However, in this case, the recall is very low (25%). The reason noticed (in training examples) was that, most of the true example sentences of positive regulation or negative regulation class type were misclassified as either phosphorylation or gene expression. 
The F1-score for CRF indicates that, as compared to the other approaches, CRF predicts 80% of the relevant tags, and, among these predicted tags, 68% of them are correct. Evaluation of Phrase Level Labeling. Evaluation of this approach was focused more on the overlap of phrases between the GENIA annotated development and CRF tagged labels. The reason being for each abstract in the GENIA corpus, there is generally a set of biomedical entities present in it. For the shared task, only a subset of these entities was considered in the annotations, and accordingly only events concerning these annotated entities were extracted. However, based on the observation of the corpus, there was a probable chance of other events involving entities not selected for the annotations. So we focused on the coverage, where both the GENIA annotations and CRF annotations agree upon. CRF performance was evaluated on two fronts in terms of this overlap. (i) Exact boundary matching: this involves exact label matching and exact trigger phrase match. (ii) Soft boundary matching: this involves exact label matching and partial trigger phrase match, allowing 1-word window on either side of the actual trigger phrase. Checking of the above constraints was a combination of template matching and manually filtering of abstracts. Table 4 gives an estimate of the coverage. Soft boundary matching increases the coverage by around 3%. Table 3 gives the overall evaluation of CRF with respect to GENIA corpus. With regards to the CRF results, accuracy for positive regulation is comparatively low. Also, the test instances for positive regulation were more than any other event type. So this reduced average precision to some extent. A detailed analysis of the results showed that around 3% tags were labeled incorrectly in terms of the event type. There were some cases where it was not certain whether an event should be marked as regulation or positive regulation. Some examples include "the expression of LAL-mRNA," where "LAL-mRNA" refers to a gene. As per examples seen in the training data, the template of the form "expression of <gene name>" generally indicates presence of a Gene expression event. Hence, more analysis may be need to exactly filter out such annotations as true negatives or deliberately induced false positives. Discussion and Conclusions PubMed is one of the most well known and used citation indexes for the Life Sciences. It provides basic keyword searches and benefits largely from a hierarchically organized set of indexing terms, MeSH, that are semi-automatically assigned to each article. PubMed also enables quick searches for related publications given one or more articles deemed relevant by the user. Some research tools provide additional cross-referencing of entities to databases such as UniProt or to the GeneOntology. They also try to identify relations between entities of the same or different types, such as protein-protein interactions, functional protein annotations, or gene-disease associations. GoPubMed [43] guides users in their everyday searches by mapping articles to concept hierarchies, such as the Gene Ontology and MeSH. For each concept found in abstracts returned by the initial user query, GoPubMed computes a rank based on occurrences of that concept. Thus, users can quickly grasp which terms occur frequently, providing clues for relevant topics and relations, and refine subsequent queries by focusing on particular concepts, discarding others. 
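The exact- and soft-boundary matching criteria described above can be made concrete with a short sketch. Trigger phrases are represented here as (start, end, event type) token spans, the one-word window is configurable, and the example spans are invented rather than taken from the GENIA annotations.

```python
# Sketch of exact- vs. soft-boundary matching for predicted trigger phrases
# against gold annotations. Spans are (start, end, event_type) over token
# offsets; "soft" allows each boundary to differ from the gold one by at most
# one token. The example spans below are made up.

def exact_match(gold, pred):
    return gold == pred

def soft_match(gold, pred, window=1):
    g_start, g_end, g_type = gold
    p_start, p_end, p_type = pred
    return (g_type == p_type
            and abs(g_start - p_start) <= window
            and abs(g_end - p_end) <= window)

def score(gold_spans, pred_spans, matcher):
    # Precision: predicted spans that match some gold span;
    # recall: gold spans that are matched by some prediction.
    tp_pred = sum(any(matcher(g, p) for g in gold_spans) for p in pred_spans)
    tp_gold = sum(any(matcher(g, p) for p in pred_spans) for g in gold_spans)
    precision = tp_pred / len(pred_spans) if pred_spans else 0.0
    recall = tp_gold / len(gold_spans) if gold_spans else 0.0
    return precision, recall

gold = [(4, 5, "Gene_expression"), (11, 12, "Positive_regulation")]
pred = [(4, 5, "Gene_expression"), (10, 12, "Positive_regulation")]

print("exact:", score(gold, pred, exact_match))   # (0.5, 0.5)
print("soft :", score(gold, pred, soft_match))    # (1.0, 1.0)
```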
In this paper, we presented the BioEve Search framework, which can help identify important relationships between entities such as drugs, diseases, and genes by highlighting them during the search process, thereby allowing the researcher not only to navigate the literature, but also to see entities and the relations they are involved in immediately, without having to read the full article. Nonetheless, we envision future extensions to provide a more complete and mainstream service, and here are a few of these next steps. Keeping the search index up-to-date and complete: we are adding a synchronization module that will frequently check Medline for supplement articles as they are published; these will typically be in the range of 2500-4500 new articles per day. Frequent synchronization is necessary to keep BioEve abreast of the Medline collection and give users access to the most recent articles. Normalizing and grounding of entity names: as the same gene/protein can be referred to by various names and symbols (e.g., the TRK-fused gene is also known as TF6; TRKT3; FLJ36137; TFG), a user searching for any of these names should find results mentioning any of the others. Removal of duplicates and cleanup of non-biomedical vocabulary that occurs in the entity tag clouds will further improve navigation and search results. Cross-referencing with biomedical databases: we want to cross-reference indexed terms with biological databases. For example, each occurrence of a gene could be linked to EntrezGene and OMIM; cell lines can be linked and enriched with ATCC.org's cell line database; and we want to cross-reference disease names with UMLS and MeSH to provide access to ontological information. To perform this task of entity normalization, we have previously developed Gnat [6], which handles gene names. Further entity classes that exhibit relatively high term ambiguity with other classes or within themselves are diseases, drugs, species, and GeneOntology terms ("Neurofibromatosis 2" can refer to the disease or the gene). Conflict of Interests. To the authors' knowledge, there is no conflict of interest with the name "BioEve" or with any trademarks.
2014-10-01T00:00:00.000Z
2012-05-28T00:00:00.000
{ "year": 2012, "sha1": "9e7a00b48a4083aef2a8bfc02aed08f49330f9a9", "oa_license": "CCBY", "oa_url": "http://downloads.hindawi.com/journals/abi/2012/509126.pdf", "oa_status": "GOLD", "pdf_src": "PubMedCentral", "pdf_hash": "10902cf187f8db8baadb2b7167870e97c4d13e9a", "s2fieldsofstudy": [ "Biology", "Computer Science", "Medicine" ], "extfieldsofstudy": [ "Computer Science", "Medicine" ] }
237215421
pes2o/s2orc
v3-fos-license
Long noncoding RNA landscapes specific to benign and malignant thyroid neoplasms of distinct histological subtypes The main types of thyroid neoplasms, follicular adenoma (FA), follicular thyroid carcinoma (FTC), classical and follicular variants of papillary carcinoma (clPTC and fvPTC), and anaplastic thyroid carcinoma (ATC), differ in prognosis, progression rate and metastatic behaviour. Specific patterns of lncRNAs involved in the development of clinical and morphological features can be presumed. LncRNA landscapes within distinct benign and malignant histological variants of thyroid neoplasms were not investigated. The aim of the study was to discover long noncoding RNA landscapes common and specific to major benign and malignant histological subtypes of thyroid neoplasms. LncRNA expression in FA, FTC, fvPTC, clPTC and ATC was analysed with comprehensive microarray and RNA-Seq datasets. Putative biological functions were evaluated via enrichment analysis of coexpressed coding genes. In the results, lncRNAs common and specific to FTC, clPTC, fvPTC, and ATC were identified. The discovered lncRNAs are putatively involved in L1CAM interactions, namely, pre-mRNA processing (lncRNAs specific to FTC); PCP/CE and WNT pathways (lncRNAs specific to fvPTC); extracellular matrix organization (lncRNAs specific to clPTC); and the cell cycle (lncRNAs specific to ATC). Known oncogenic and suppressor lncRNAs (RMST, CRNDE, SLC26A4-AS1, NR2F1-AS1, and LINC00511) were aberrantly expressed in thyroid carcinomas. These findings enhance the understanding of lncRNAs in the development of subtype-specific features in thyroid cancer. www.nature.com/scientificreports/ (originates from the promoter region of a protein-coding gene with transcription proceeding in the opposite direction on the other strand); and 3-prime overlapping (overlap the 3′UTR of a protein-coding locus on the same strand). Today, the number of annotated lncRNA genes has reached 14 720 according to Ensembl version 93 9 . In thyroid cancer, several lncRNAs have been shown to have pathogenic and predictive roles, including BANCR, FALEC, CNALPTC1, PVT1, NAMA, PTCSC1, PTCSC2, PTCSC3, and TNRC6C-AS1 [10][11][12][13][14][15][16][17][18][19][20][21] . However, all of the studies to date have considered only PTC, and mostly none of the previous works takes into account the difference between clPTC and fvPTC. There are no published studies describing the landscapes of lncRNAs in ATC, FTC and FA. Nevertheless, lncRNAs differentially expressed in ATC may reflect anaplastic features and be strong prognostic factors. As the morphology and behaviour of FTC differ from those of PTC, it is proposed that the landscape of lncRNAs in FTC may be different from that of PTC. Investigation of lncRNAs common and specific to FA and FTC is important in understanding their relations and revealing differential diagnostic markers. This study aimed to identify lncRNAs specific and common to the main types of thyroid neoplasms (FA, FTC, fvPTC, clPTC and ATC). The expression data from microarray technology (8 datasets) and RNA-Seq technology (the PRJEB11591 dataset and TCGA transcriptome data) were analysed. Results LncRNAs differentially expressed in thyroid neoplasms. LncRNA expression was evaluated in the main histological subtypes of thyroid neoplasms, FA, FTC, fvPTC, clPTC, and ATC, compared to those in NT. The expression of 3910 lncRNA genes in the microarray dataset, 2587 in the RNA-Seq PRJEB11591 dataset and 3009 in the RNA-Seq TCGA dataset was analysed. 
The number of genes analysed corresponded to the total number of lncRNAs covered by uniquely mapped probes in the Affymetrix Human Genome U133 Plus 2.0 Array and the number of lncRNAs yielded after filtration by low number of counts for RNA-Seq datasets. The numbers of lncRNAs found to be differentially expressed in each subtype compared to NT are presented in Table 1. The complete lists of the identified differentially expressed lncRNAs are shown in Supplementary files 1-3. Volcano plots representing the distribution of fold change and adjusted p values in the studied histological subtypes are shown in Supplementary file 4. Hierarchical clustering of differentially expressed lncRNAs in the microarray and PRJEB11591 datasets is presented in Fig. 1. There was strong clustering of ATC, clustering of clPTC and weak clustering of fvPTC lncR-NAs. No clustering of lncRNAs within the FTC or FA groups was observed (Fig. 1). Via cross-dataset confirmation, 116 genes in clPTC were validated (45 genes found in all analysed datasets, 71 genes without probes in the microarray were found in both RNA-Seq datasets; Fig. 2A), and 62 genes in fvPTC were validated (Fig. 2B). These genes can be considered to have robustly differentially expressed lncRNAs. There are no datasets available for performing cross-dataset validation of FA, FTC or ATC genes. LncRNAs common and specific to each histological subtype were detected via intersection of the genes expressed differentially in each subtype compared to NT, and subsequent selection of lncRNAs validated in clPTC and fvPTC, and significantly differentially expressed in comparison between subtypes of neoplasms (Figs. 3, 4). LncRNAs common to FA and WDTC. Of the 35 lncRNAs found to be differentially expressed in FA and WDTC (FTC, clPTC, and fvPTC) compared to NT, 13 genes were cross validated in clPTC and fvPTC (Figs. 3, 4, Table 2). The expression of LINC02555 and LINC02471 increased during the progression from adenoma to carcinomas and was significantly higher in fvPTC and clPTC than in FA. The expression of ENSG00000256542 and ENSG00000258117 decreased during the transition from FA to carcinomas and was significantly lower in fvPTC and clPTC than in FA or FTC. LncRNAs common to WDTC. There were 32 lncRNAs differentially expressed in all the studied histological subtypes of WDTC (FTC, clPTC, and fvPTC) but not in FA (Fig. 3). Of these lncRNAs, 6 lncRNAs were validated to be significantly differentially expressed in clPTC and fvPTC compared to FA (Fig. 4, Table 3). None of the 32 lncRNAs were differentially expressed in FTC compared to FA. LncRNAs common to papillary carcinomas. There were 22 genes differentially expressed in both clPTC and fvPTC, but not in FA or FTC (Fig. 3), validated and significantly differentially expressed compared to FA and FTC (Fig. 4, Table 4)-lnRNAs are putatively associated with papillary features in thyroid carcinomas. Table 5). However, none of these lncRNAs was differentially expressed compared to those in FA. Of the 29 genes differentially expressed in fvPTC but not in other differentiated carcinomas or FA (Figs. 3, 4), only the ENSG00000257647 gene was specific to fvPTC-validated and significantly differentially expressed in fvPTC compared to FA, FTC and clPTC. The 32 genes were found to be differentially expressed in clPTC but not in other differentiated carcinomas or FA, validated, and significantly differentially expressed compared to those in fvPTC, FTC and FA-lncRNAs specific to clPTC (Figs. 3, 4, Table 6). 
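The cross-dataset validation and the subsequent subtype-specificity calls are, at their core, set operations over per-dataset lists of differentially expressed lncRNAs. The sketch below illustrates that logic with invented gene identifiers; it omits the additional requirement that subtype-specific genes also differ significantly in the direct between-subtype comparisons.

```python
# Minimal sketch of the set logic behind the cross-dataset validation described
# above. Each dataset contributes a set of differentially expressed lncRNAs per
# subtype; a gene is "validated" for clPTC or fvPTC if it is significant in all
# three datasets, or in both RNA-Seq datasets when it has no probe on the
# microarray. All gene IDs are invented.

microarray_probes = {"G1", "G2", "G3", "G5"}          # lncRNAs covered by the array
de = {                                                 # DE calls vs. normal tissue
    "microarray": {"clPTC": {"G1", "G2"}, "fvPTC": {"G2"}},
    "PRJEB11591": {"clPTC": {"G1", "G2", "G4"}, "fvPTC": {"G2", "G4"}},
    "TCGA":       {"clPTC": {"G1", "G4"},       "fvPTC": {"G2", "G4"}},
}

def validated(subtype):
    in_all = de["microarray"][subtype] & de["PRJEB11591"][subtype] & de["TCGA"][subtype]
    rnaseq_only = (de["PRJEB11591"][subtype] & de["TCGA"][subtype]) - microarray_probes
    return in_all | rnaseq_only

val_clptc, val_fvptc = validated("clPTC"), validated("fvPTC")
print("validated in clPTC:", val_clptc)                 # contains G1 and G4
print("validated in fvPTC:", val_fvptc)                 # contains G2 and G4
print("candidates specific to clPTC:", val_clptc - val_fvptc)   # contains G1
```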
Figure 2. In silico validation of differentially expressed lncRNAs in clPTC (A) and fvPTC (B). In clPTC, 116 genes were considered to be validated (differentially expressed in all datasets, or differentially expressed in both RNA-Seq datasets but absent from the microarray probes). In fvPTC, 62 genes were considered to be validated.
LncRNAs specific to ATC. ATC samples were available only in the microarray dataset, which also included two variants of PTC. Of the 376 lncRNAs differentially expressed in ATC compared to NT, 252 were not differentially expressed in the other investigated histological subtypes, and 185 were significantly differentially expressed compared to those in clPTC and fvPTC-lncRNAs specific to ATC. The 30 most differentially expressed genes are presented in Table 7, and the full list is shown in Supplementary file 5. Potential biological functions of aberrantly expressed lncRNAs. The coexpressed genes for each lncRNA from the top 5 most differentially expressed list for the discussed groups were identified. The median (Q1-Q3) number of coexpressed genes was 138.5 (46.25-256.5). Analysis of the enrichment of GO biological processes and GO molecular functions, KEGG, and Reactome terms with coexpressed coding genes allowed us to identify putative pathways involving common and specific lncRNAs (Fig. 5). The main functions of the lncRNAs common to FA and WDTC were cancer pathways (Colorectal, Non-small cell lung, Thyroid) and p53 signalling; of the lncRNAs common to WDTC, Pathways in cancer and L1CAM interactions; of the lncRNAs common to papillary carcinomas, aldehyde dehydrogenase (NAD) activity; of the lncRNAs specific to FTC, processing of capped intron-containing pre-mRNA; of the lncRNAs specific to fvPTC, the PCP/CE pathway and beta-catenin-independent WNT signalling; of the lncRNAs specific to clPTC, extracellular matrix organization and endoderm formation; and of the lncRNAs specific to ATC, the cell cycle and mitotic processes. Discussion. Histological subtypes of follicular cell-derived thyroid carcinomas (FTC, PTC, and ATC) significantly differ in their mutational landscapes and clinical characteristics. Although FTC and clPTC are both WDTCs, FTC is characterized by a follicular growth pattern and tends more often to spread as metastases to distant organs, while clPTC typically has papillary architecture and spreads more often to lymph nodes in the neck. In FTC, K/H/NRAS and PAX8/PPARG mutations are prevalent, whereas BRAF mutations and tyrosine kinase fusions prevail in clPTC 1 . The clinical characteristics of fvPTC are intermediate; fvPTC is composed of neoplastic follicles, not papillae, but with follicular cells showing nuclear features typical of PTC 22 . The mutational profile of fvPTC is most similar to that of FTC: both exhibit a prevalence of K/H/NRAS and PAX8/PPARG mutations. In a previous TCGA study, fvPTC was characterized as a Ras-like tumour, and its classification as a papillary carcinoma was questioned 23 . Recently, reclassification of encapsulated fvPTC as a "noninvasive follicular thyroid neoplasm with papillary-like nuclear features" (NIFTP) was proposed 2 . ATC is an advanced-stage thyroid neoplasm and is the most aggressive thyroid cancer. It is expected that there are specific molecular features, including lncRNA patterns, associated with the clinical and histological features of WDTC and the aggressive behaviour of ATC.
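The enrichment analysis of coexpressed coding genes mentioned above is typically carried out as an over-representation test. The sketch below shows the corresponding one-sided Fisher's exact test for a single pathway, with all counts invented for illustration; the actual analysis covered GO, KEGG and Reactome terms and would also include multiple-testing correction.

```python
from scipy.stats import fisher_exact

# Minimal sketch of the over-representation logic behind pathway enrichment of
# coexpressed coding genes. For one pathway, build a 2x2 table of coexpressed
# vs. background genes that fall inside vs. outside the pathway and apply a
# one-sided Fisher's exact test. All counts are invented.

n_background = 20_000        # annotated protein-coding genes considered
n_coexpressed = 140          # genes coexpressed with one lncRNA (cf. the median above)
n_pathway = 300              # genes annotated to the pathway, e.g. "Cell Cycle"
n_overlap = 12               # coexpressed genes that fall in the pathway

table = [
    [n_overlap,             n_coexpressed - n_overlap],
    [n_pathway - n_overlap, n_background - n_coexpressed - n_pathway + n_overlap],
]
odds_ratio, p_value = fisher_exact(table, alternative="greater")
print(f"odds ratio = {odds_ratio:.2f}, one-sided p = {p_value:.2e}")
# In practice, p-values would be corrected for the many terms tested
# (e.g. Benjamini-Hochberg FDR).
```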
FA is thought to be a benign counterpart of FTC, and understanding the common and different molecular features of these neoplasms is important for the development of diagnostic and therapeutic strategies. In this study, the expression of lncRNAs was evaluated in the main histological subtypes of thyroid neoplasms: FA, FTC, fvPTC, clPTC and ATC. The datasets analysed in the study (a microarray dataset of 8 independent experiments, RNA-Seq PRJEB11591, and RNA-Seq TCGA) allowed us to perform robust cross-dataset validation of the results for clPTC and fvPTC and to include representative sets of FA, FTC and ATC samples. LncRNA landscapes in FA, FTC and ATC were analysed for the first time. The highest number of genes aberrantly expressed compared to normal thyroid tissue was found in ATC (330 lncRNAs), followed by clPTC, FTC and fvPTC, which reflects the more advanced stage of ATC. Since the data for ATC, FA and FTC were limited to a single dataset, the results for these subtypes are preliminary. Intersection of the differentially expressed lncRNAs and subsequent comparison of expression between subtypes of neoplasms led to the discovery of lncRNAs common to FA and WDTC (13 genes), common to WDTC (6 genes), common to classical and follicular variants of PTC (22 genes), and specific to FTC (19 genes), fvPTC (1 gene), clPTC (32 genes), and ATC (185 genes). The discovered lncRNAs are proposed to be involved in the development of the clinical and morphological features of the studied subtypes. Putative biological processes involving common and specific lncRNAs were identified. LncRNAs common to all studied thyroid neoplasms (including FA) and common to WDTC appear to be involved in carcinogenesis at different sites. L1CAM interactions, found in this study to involve lncRNAs common to WDTC, have previously been associated with well-described roles in tumour progression, metastases and the epithelial-to-mesenchymal transition 24,25 . LncRNAs common to follicular and classical variants of papillary carcinoma (associated with papillary histology) are involved in aldehyde dehydrogenase (NAD) activity. Aldehyde dehydrogenase is known to maintain cancer stem cell properties in various cancers, including the thyroid 26 . Biological processes involving lncRNAs specific to FTC include processes associated with splicing (Processing of Capped Intron-Containing Pre-mRNA, mRNA Splicing, and RNA processing). Accumulating evidence suggests that aberrant RNA splicing is a common and driving event in cancer development and progression. For instance, oncogenic Ras signalling via the ERK and PI3-K/Akt pathways regulates the phosphorylation of splicing factors such as SRSF1, SRSF7, and SPF45 and drives the switching of active and inactive states of tumour promoters and suppressors (MST1R, FAS, CD44, LBR, Casp-9, KLF6, and others) via alternative splicing 27,28 . None of the lncRNAs specific to FTC were differentially expressed compared to those in FA. The absence of lncRNAs differentially expressed between FTC and FA corresponds to the similarity of these subtypes and the frequent difficulty of cytology-based differential diagnostics. Only lncRNA ENSG00000257647 is specific to fvPTC, which might be explained by its intermediate morphology, with features of both papillary and follicular carcinomas, leading to its debatable classification.
LncRNA ENSG00000257647 appeared to be involved in WNT signalling, predominantly through the beta-catenin-independent WNT pathway (especially planar cell polarity, which modulates cytoskeleton rearrangements through the activation of the small GTPases RhoA and Rac and their downstream effectors Rock and JNK). WNT signalling is known to play a crucial role in thyroid carcinogenesis, and several mechanisms of its deregulation have been described, including inhibition of the β-catenin degradation complex via its phosphorylation by RET/PTC, inhibition of E-cadherin expression through the MAPK/ERK pathway activated by BRAF mutations, and activation of both canonical and non-canonical Wnt pathways by RAS mutations 29,30 . LncRNAs specific to clPTC are involved in extracellular matrix organization and in endoderm and collagen formation. Extracellular matrix (ECM) disorganization is known to play a pivotal role in cancer initiation and progression. The major driving mutation in clPTC is BRAF p.V600E, and there is emerging evidence of ECM remodelling induced by BRAF p.V600E in PTCs 31 . Notably, it has previously been shown that the extracellular matrix of PTCs driven by BRAF p.V600E (but not by mutant HRAS) is enriched with stromal-derived fibrillar collagen and facilitates cancer progression 32 . LncRNAs specific to ATC are probably associated with its anaplastic features and aggressive behaviour. For these lncRNAs, there is a strong enrichment of cell cycle and mitotic pathways, which possibly reflects their involvement in the loss of differentiation and the high proliferation rate characteristic of ATC. Of the lncRNAs previously described in thyroid cancer, we found that PTCSC3 was downregulated in all investigated neoplasms, including FA; TNRC6C-AS1 was upregulated in papillary carcinomas; and PVT1 was specifically upregulated in ATC 11,17,18 . Other lncRNAs previously described in thyroid malignancy (BANCR, NAMA, CNALPTC1, FALEC, and PTCSC2) were not identified in our study 10,12,13,16,20,21 . A possible explanation is the strong association of these lncRNAs with specific mutations and the heterogeneity of driving mutations within the same subtype; for example, the aberrant expression of BANCR is driven by BRAF mutation. The aberrant expression of lncRNAs with Ensembl annotation found by Liyanarachchi et al. (2016) in PTC was confirmed in our study 14 . Most of these lncRNAs were common to thyroid neoplasms (including FA) or common to classical and follicular variants of PTC. None of the lncRNAs found in our study to be subtype specific were reported by Liyanarachchi et al. In the present study, we identified some lncRNAs with known roles in tumorigenesis but not previously described in thyroid cancer 33-37 . The identified upregulated promoters of cancer progression included NR2F1-AS1 and LINC00511 in clPTC and CRNDE in ATC; the downregulated tumour suppressors included SLC26A4-AS1 in clPTC and RMST in ATC.

Conclusion
LncRNAs common to FA and WDTC, common to WDTC, common to carcinomas with papillary features, and specific to clPTC, fvPTC, FTC and ATC were discovered in an analysis performed with the most comprehensive datasets available (a combination of a microarray dataset and two RNA-Seq datasets). The similarity of the lncRNA landscapes in FTC and FA was revealed.
The results showed that lncRNAs common to FA and WDTC and common to WDTC are involved in pathways in cancer at various sites, p53 signalling and L1CAM interactions; lncRNAs common to papillary carcinomas are involved in aldehyde dehydrogenase (NAD) activity; lncRNAs specific to FTC are involved in mRNA processing; the lncRNA specific to fvPTC is involved in planar cell polarity and WNT signalling; lncRNAs specific to clPTC are involved in extracellular matrix organization and endoderm formation; and lncRNAs specific to ATC are involved in the cell cycle and mitotic processes. The lncRNAs found to be specific to ATC, including CRNDE and RMST, are likely associated with cancer aggressiveness and cancer progression.

Materials and methods
Microarray datasets. The microarray datasets obtained from the Affymetrix Human Genome U133 Plus 2.0 Array (Platform GPL570) were originally selected from the GEO database. The following datasets were included: GSE3467, GSE60542, GSE35570, GSE76039, GSE53157, GSE33630, GSE65144, and GSE29265. A total of 107 samples of normal tissue (NT) and 32 fvPTC, 48 clPTC, and 49 ATC samples were analysed. CEL files were downloaded, and normalization was performed using the gcrma R package. Microarray probes were annotated with Ensembl version 93 using the biomaRt package 38 .

RNA-Seq datasets. For the RNA-Seq PRJEB11591 dataset, genes with low counts (fewer than 1 count in a number of samples exceeding the size of the smallest sample group) were eliminated, and TMM normalization (edgeR package) and the voom method from the limma R package were applied. In the TCGA transcriptome data, 58 NT, 356 clPTC and 101 fvPTC samples were selected. Samples of metastases and other minor histological subtypes were excluded. Raw counts (HTSeq-Counts workflow type; briefly, STAR 2-pass alignment followed by gene expression count assessment with HTSeq) were downloaded from the Genomic Data Commons Data Portal (GDC, https://portal.gdc.cancer.gov/). Genes with low counts (fewer than 1 count in a number of samples exceeding the size of the smallest sample group) were eliminated, followed by TMM normalization (edgeR package) and voom analysis with limma 42 .

Selection of lncRNA genes. Protein-coding genes and genes attributed to Havana biotypes not related to lncRNAs were eliminated. Genes of the following Havana biotypes were included in the analysis: lincRNA, antisense, 3-prime overlapping ncRNA, bidirectional promoter lncRNA, misc RNA, processed transcript, sense intronic, and sense overlapping.

Statistical analysis. To identify differentially expressed lncRNAs, linear modelling using the limma package was performed 43 . Genes with an FDR-adjusted p value ≤ 0.01 and a fold change (FC) ≥ 2.0 were considered differentially expressed. Heat map analysis of differentially expressed genes was performed using the coolmap function of limma.

Validation. For clPTC and fvPTC, the sets of genes found to be significantly differentially expressed in the previous step in the microarray, RNA-Seq PRJEB11591, and RNA-Seq TCGA datasets were intersected. Genes found in all three datasets, and genes found in both RNA-Seq datasets but absent from the microarray probes, were considered validated.

Selection of lncRNAs common and specific to histological subtypes.
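The differential expression workflow described above can be summarised in a short R sketch. This is a minimal illustration, not the authors' code: the object names (counts, group) are assumed placeholders for a real count matrix and its sample labels, and the contrast shown (clPTC versus NT) is just one example of the comparisons performed.

```r
## Minimal sketch of the RNA-Seq differential expression pipeline
## (edgeR + limma-voom) with the thresholds stated in the Methods.
## `counts` (genes x samples) and `group` (e.g. "NT", "clPTC") are
## assumed placeholders, not objects from the original study.
library(edgeR)
library(limma)

dge <- DGEList(counts = counts, group = group)

## Filter: drop genes with fewer than 1 count in a number of samples
## exceeding the size of the smallest sample group.
keep <- rowSums(dge$counts >= 1) >= min(table(group))
dge  <- dge[keep, , keep.lib.sizes = FALSE]

## TMM normalization followed by the voom transformation.
dge    <- calcNormFactors(dge, method = "TMM")
design <- model.matrix(~ 0 + group)
v      <- voom(dge, design)

## Linear modelling with limma; example contrast clPTC vs NT.
fit  <- lmFit(v, design)
cont <- makeContrasts(groupclPTC - groupNT, levels = design)
fit2 <- eBayes(contrasts.fit(fit, cont))

## Thresholds from the paper: FDR-adjusted p <= 0.01 and FC >= 2
## (|log2 fold change| >= 1).
tt <- topTable(fit2, number = Inf, adjust.method = "BH")
de <- subset(tt, adj.P.Val <= 0.01 & abs(logFC) >= 1)

## Heat map of the differentially expressed genes with limma's coolmap().
coolmap(v$E[rownames(de), ])
```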
LncRNAs common and specific to FA and WDTC were selected via intersection analysis of the genes found to be significantly differentially expressed in each subtype compared to NT in the RNA-Seq dataset PRJEB11591, with subsequent application of the following criteria (a code sketch of these set operations is given after this list):

- Common to FA and WDTC: confirmed through validation for clPTC and fvPTC;
- Common to WDTC: confirmed through validation for clPTC and fvPTC, and significantly differentially expressed compared to FA;
- Common to papillary carcinomas: confirmed through validation for clPTC and fvPTC, and significantly differentially expressed compared to FA and FTC;
- Specific to clPTC, fvPTC or FTC: confirmed through validation for clPTC and fvPTC (not applied to FTC), and significantly differentially expressed compared to each other studied subtype.

LncRNAs specific to ATC were selected via intersection with the genes found in FA and WDTC, with subsequent filtration for genes significantly differentially expressed compared to those in clPTC and fvPTC.

Evaluation of potential biological functions. To identify genes positively and negatively coexpressed with the 5 most differentially expressed lncRNAs, pairwise Pearson correlations between the lncRNAs and all genes were calculated using the RNA-Seq PRJEB11591 dataset (for FA, FTC, fvPTC and clPTC) and the microarray dataset (for ATC). Genes with an absolute r ≥ 0.7 and a significant correlation (p value < 0.05) were considered coexpressed. For the coexpressed genes, enrichment of Gene Ontology (GO) Biological Process (2018), GO Molecular Function (2018), Kyoto Encyclopedia of Genes and Genomes (KEGG, 2019) and Reactome (2016) terms was estimated using Enrichr 44,45 . Terms with Fisher's exact test adjusted p values ≤ 0.05 were considered significantly enriched 42-46 .
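The validation and coexpression steps are essentially set intersections and correlation screens, which the following R sketch illustrates. It is a hedged example under stated assumptions: the vectors de_microarray, de_prjeb, de_tcga, array_probes and the matrix expr are hypothetical names standing in for the per-dataset gene lists and expression data, not objects from the original analysis.

```r
## Minimal sketch (assumed object names, not the authors' code) of the
## cross-dataset validation and coexpression screens described above.

## 1) Validation: genes significant in all three datasets, plus genes
## significant in both RNA-Seq datasets but absent from microarray probes.
in_all         <- Reduce(intersect, list(de_microarray, de_prjeb, de_tcga))
in_rnaseq      <- intersect(de_prjeb, de_tcga)
no_array_probe <- setdiff(in_rnaseq, array_probes)
validated      <- union(in_all, no_array_probe)

## 2) Coexpression: Pearson correlation of one lncRNA against all genes;
## |r| >= 0.7 with p < 0.05 counts as coexpressed. `expr` is an assumed
## genes x samples expression matrix, `lnc` a single lncRNA row name.
coexpressed_with <- function(expr, lnc, r_cut = 0.7, p_cut = 0.05) {
  res <- t(apply(expr, 1, function(g) {
    ct <- cor.test(expr[lnc, ], g, method = "pearson")
    c(r = unname(ct$estimate), p = ct$p.value)
  }))
  rownames(res)[abs(res[, "r"]) >= r_cut &
                res[, "p"] < p_cut &
                rownames(expr) != lnc]
}

## The resulting coexpressed coding genes would then be submitted to
## Enrichr (GO BP/MF 2018, KEGG 2019, Reactome 2016) for enrichment.
```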
2021-08-20T06:17:26.208Z
2021-08-18T00:00:00.000
{ "year": 2021, "sha1": "fd92e294b1369131564fee825386b409d5e5d55a", "oa_license": "CCBY", "oa_url": "https://www.nature.com/articles/s41598-021-96149-2.pdf", "oa_status": "GOLD", "pdf_src": "PubMedCentral", "pdf_hash": "db4d0d1609ff05761a6823da2f3956a22895bbec", "s2fieldsofstudy": [ "Medicine", "Biology" ], "extfieldsofstudy": [ "Medicine" ] }
5070950
pes2o/s2orc
v3-fos-license
Thinking big with small molecules Synthetic chemistry has enabled scientists to explore the frontiers of cell biology, limited only by the laws of chemical bonding and reactivity. As we investigate biological questions of increasing complexity, new chemical technologies can provide systems-level views of cellular function. Here we discuss some of the molecular probes that illustrate this shift from a "one compound, one gene" paradigm to a more integrated approach to cell biology. Since the alkaloid colchicine was described as a mitotic poison in 1934 (Lits, 1934), chemical probes have played an integral role in cell biological research. Laboratories are commonly stocked with the transcriptional inhibitor actinomycin D, the translational blocker cycloheximide, and other "workhorse" small-molecule modulators that target key cellular processes. More specialized aspects of cell function are also routinely studied with pharmacological agents, ranging from the adenylate cyclase activator forskolin to the mTOR inhibitor rapamycin. The rapid and often reversible actions of these chemical tools make them valuable complements to genetic technologies, and over the past several decades, chemistry-driven cell biology has evolved into what is now commonly referred to as chemical genetics. Chemical genetics shares several parallels with its nucleic acid-based namesake. Forward chemical genetic screens survey the proteome with diverse compound libraries, just as mutagenesis screens strive to stochastically generate genomic alterations. Reverse chemical genetics interrogates protein function through specific, small-molecule modulators, akin to the targeted disruption of individual genes by CRISPR/Cas9 and RNA interference reagents. Chemical genetic screens can also identify pharmacological suppressors or enhancers of existing phenotypes. Moreover, the single-nucleotide resolution of genetic technologies has inspired efforts to match that precision with chemical probes. Vast collections of natural products and structurally diverse synthetic reagents have been assembled with the hope of identifying one or more specific modulators for each gene product. This reductionist "one compound, one gene" focus has resonated with researchers interested in chemical mechanisms. Indeed, the beauty of deconstructing a cellular process into individual molecular interactions or reactions has attracted many chemists to the biological sciences. However, the sequencing of entire genomes has shown that even a comprehensive list of individual parts and annotated functions yields an incomplete picture of cell biology. A more holistic view requires an understanding of how systems with shared structural elements, chemical reactivity, and/or spatiotemporal dynamics contribute to emergent properties. Several genetic and biochemical technologies have been developed to capture this "big picture." RNA sequencing can provide a global perspective of the transcriptomes associated with specific cell states, and ribosomal profiling can reveal which of these transcripts are actively translated at a particular moment. Various regimens of biochemical cross-linking, immunoprecipitation, and sequencing have been used to obtain genome-wide views of protein-nucleic acid interactions and chromatin structure.
Central to these methods is the facility with which DNA and RNA can be modified, amplified, and analytically characterized: technological advances that have been largely limited to nucleic acids. As a result, we are comparatively in the dark about how proteins and other biomolecules function at the systems level. Illuminating their collective activities in cells will require techniques that can target them in a structure- or reaction-dependent manner, and innovation at the chemistry-biology interface can help us meet this challenge.

Several recent advances illustrate the power of thinking big with small molecules. One of the breakthroughs in this approach is activity-based protein profiling, in which substrate-like probes are used to covalently tag specific enzyme classes for visualization or purification (Fig. 1 A; Cravatt et al., 2008). By treating live cells or cell lysates with these reagents, one can gain global insights into enzyme families that recognize a given substrate. For example, biotinylated long-chain fluorophosphonates have been used to obtain serine hydrolase activity signatures for specific tissue types, exploiting the general reactivity of these suicide substrates toward this enzyme class (Liu et al., 1999). Through this approach, expression of the membrane-associated serine hydrolase KIAA1363 was found to strongly correlate with cancer cell invasiveness (Jessani et al., 2002). Other nucleophile-containing enzyme families can be targeted by electrophilic reagents, including cysteine proteases, deubiquitinases, kinases, and glycosidases. Photoreactive groups can extend activity-based protein profiling to an even broader spectrum of targets. Many of these probes can be applied in live cells, taking advantage of azide/alkyne cycloaddition, azide/phosphine ligation, or other bio-orthogonal tagging chemistries that do not cross-react with endogenous molecules (Patterson et al., 2014).

Chemical methods have also been devised to tackle the converse systems-level question: what is the ensemble of substrates for a given enzymatic activity? Using the engineered enzyme subtiligase to biotinylate free protein N termini, 333 caspase-like cleavage sites within 292 protein substrates were identified in apoptotic cells (Fig. 1 B; Mahrus et al., 2008). A cyclin-dependent kinase 1 (Cdk1/cyclin B) mutant and a complementary ATP-γ-S analogue were also used to thiophosphate-label substrates for subsequent covalent capture, revealing >70 direct Cdk1 targets (Blethrow et al., 2008).

A variety of other posttranslational alterations can now be comprehensively surveyed in cells by deploying chemical probes with unique reactivities. Protein oxidation states can be assessed using azide- or alkyne-based dimedone analogues, which selectively react with the redox-sensitive intermediate cysteine sulfenic acid (Fig. 1 C; Paulsen and Carroll, 2013). Protein prenylation and fatty acylation can be chemically monitored through the metabolic incorporation of azide- or alkyne-functionalized lipids into proteins (Hannoush and Sun, 2010). Metabolic labeling with peracetylated azido-N-acetylglucosamine (GlcNAc) also enables the profiling of O-GlcNAc-modified proteins (Prescher and Bertozzi, 2006). By coupling these methods with bio-orthogonal tagging and mass spectrometry-based sequencing, whole collections of posttranslationally modified proteins have been characterized. Importantly, these techniques have also identified several proteins previously unknown to bear these functional groups. How these populations vary between cell types or change in response to specific perturbations can then be readily assessed.

Chemical approaches can even enhance our "big picture" view of DNA and RNA regulation by targeting nucleic acid modifications that are challenging to discern through genetic techniques alone. For instance, the bacteriophage enzyme β-glucosyltransferase has been used to selectively couple glucose to 5-hydroxymethylcytosine, a common DNA oxidation product that may have functional roles in development and disease. By comparing TET protein-assisted bisulfite sequencing of the glucose-modified DNA to traditional bisulfite sequencing results, 5-hydroxymethylcytosine sites associated with specific cellular states can be determined with genome-wide, single-base resolution (Yu et al., 2012). Our ability to comprehensively characterize RNA-protein interactions through UV cross-linking, immunoprecipitation, and sequencing has similarly benefited from synthetic reagents. In this case, metabolic labeling of cellular RNAs with 4-thiouridine has been found to dramatically increase cross-linking efficiency and RNA recovery, as well as generate distinguishing thymine-to-cytosine transitions at the site of 4-thiouridine-protein coupling (Hafner et al., 2010).

Finally, it is worth noting a recent chemical strategy that targets biomolecules according to their subcellular localization rather than their inherent chemical properties. The segregation of cellular components into functional ensembles has typically been studied through biochemical fractionation or cell imaging, and the two techniques have complementary strengths and limitations. In comparison, this new proteomic mapping method combines the best of both worlds: proteome-wide detection and spatiotemporal resolution in live cells. By targeting an engineered form of ascorbate peroxidase (APEX) to the mitochondrial matrix and then pulse-treating the cells with biotin-phenol and hydrogen peroxide, proteins within that compartment have been selectively biotinylated through phenoxyl radical coupling (Fig. 1 D; Rhee et al., 2013). This approach identified 495 components of the mitochondrial matrix proteome, including 31 factors not previously associated with mitochondria. Because APEX is active in all subcellular domains, this chemical mapping procedure should be generally applicable to other organelles of interest.

The advances cited here are far from a comprehensive list, but they illustrate the exciting new capabilities and insights that can be obtained when chemical approaches are applied to questions of cell biology. Moreover, they demonstrate how chemistry can go beyond the targeted perturbation of individual gene products and capture the complexity that underlies cellular behavior. Future challenges include the development of new molecular probes to more broadly cover the chemical space within cells. These include technologies that facilitate the profiling of chromatin marks, three-dimensional genomic architectures, nonenzymatic protein families, protein-glycan interactions, or cellular metabolites, to name a few. Methods that are amenable to live-cell analyses or even in vivo applications will be particularly valuable.

Achieving these goals will require the collaborative efforts of chemists and biologists. Fortunately, this partnership has been fostered by recent developments, including: (1) new PhD programs at the chemistry-biology interface, particularly those that provide research opportunities spanning synthetic chemistry, cell biology, and animal models; (2) institutional compound screening and synthesis services that make chemistry more accessible to biologists; (3) complementary genomic resources that make complex biological systems more accessible to chemists; and (4) scientific journals and conferences that support the chemical biology community as a whole. Building upon this logistical framework, it will be important to navigate cultural differences between the two communities. In many respects, chemists and biologists speak different "languages" that stem from distinct scientific traditions and training practices. "Bilingual" scientists who have had substantive experiences in both disciplines will play important roles in bridging the two. By combining an in-depth knowledge of chemistry and a sophisticated understanding of cell biology, we can realize a vision that neither perspective can achieve alone.
2016-05-12T22:15:10.714Z
2015-04-13T00:00:00.000
{ "year": 2015, "sha1": "0ae4e3f6ad222c6437253d0485c7860c425cac2d", "oa_license": "CCBYNCSA", "oa_url": "https://rupress.org/jcb/article-pdf/209/1/7/951115/jcb_201501084.pdf", "oa_status": "BRONZE", "pdf_src": "PubMedCentral", "pdf_hash": "0ae4e3f6ad222c6437253d0485c7860c425cac2d", "s2fieldsofstudy": [ "Biology", "Chemistry" ], "extfieldsofstudy": [ "Medicine", "Biology" ] }
147799865
pes2o/s2orc
v3-fos-license
CATHOLIC ENLIGHTENMENT FOR CHILDREN. TEACHING RELIGION TO CHILDREN IN THE HABSBURG EMPIRE FROM JOSEPH II TO THE RESTORATION

Ilustración católica para niños

During Joseph II's reign a deep cultural shift took place within the intellectual and religious establishment, with the acceptance of philosophical and pedagogical ideas that bore a distinctive Enlightenment and Protestant stamp. This cultural shift was applied to the teaching of religion by some relevant figures of the episcopal and pedagogical elites (J. A. Gall, F. M. Vierthaler, F. de Paula Gaheis, J. M. Leonhard). New handbooks and textbooks of the catechism were written which introduced new dialogic methods, more narrative, and borrowed Rochow's typology of moral short stories. The content of Bishop Gall's books was heavily rationalistic, whereas subsequent texts tried to balance reason and faith. Vierthaler, Gaheis, and Leonhard used a language that was more suitable for children and closer to the New Testament, with the use of parables and short stories. The so-called Socratic method was used in different ways by these authors. In the age of the Restoration, in spite of the process of school confessionalization, the heritage of the spirit of Enlightenment was still present, since by law the pedagogy taught in the Empire's academic chairs and teacher training courses was the one defined by Milde, which bore a Kantian imprint, and stressed the importance of developing inner moral law in pupils. Leonhard was a follower of Milde, and his catechism, eventually approved for elementary schools for decades, bore this stamp. So at the end of the eighteenth and the beginning of the nineteenth centuries a new way of teaching religion was introduced, debated and contested in Habsburg Catholic territories. Rousseau's and Salzmann's theories were discussed; rationalism and faith, natural religion and revelation were confronted. In the end more attention was devoted to child psychology and language. The cultural fracture caused by Josephinism became less severe: orthodoxy was restored, but new pedagogical ideas actually entered the teaching of religion.
INTRODUCTION
In the age of the Enlightenment, Catholicism came under strong attack from philosophical and pedagogical ideas. In the Austria of Maria Theresa, anti-Jesuit feelings were widespread among intellectuals, while rationalistic and naturalistic theories from France and from Germany gained ground. But it was during Joseph's reign (1780-90) that a deeper cultural shift took place in a key part of the intellectual and religious establishment, when the katholische Aufklärung, already in evidence since Charles VI's reign, rapidly developed with the Emperor's ecclesiastical policy. The katholische Aufklärung, which came from Ludovico Antonio Muratori's idea of a «regulated devotion», and from Febronian and Jansenist theology, led to the acceptance of philosophical and pedagogical ideas that bore a clear Enlightenment and Protestant stamp. 1 Already several of Maria Theresa's advisers were Protestant converts or had studied in Protestant universities. Insistence on rationalism, on individual freedom, on the natural foundations of religion, and on the priority of ethics over dogmatics led to the recognition of religious tolerance (Toleranzpatent 1781), but also to the intrusion of the State into the Church's affairs. Joseph wanted to reform the Church, but he did so as a lay monarch. He banned the papal bulls Unigenitus, which condemned Jansenism, and In coena Domini, which asserted the pope's right to depose lay rulers; he also suppressed contemplative orders, monasteries and religious brotherhoods, founded new dioceses and parishes, dismantled baroque piety, changed the liturgy, forbade burials within churches and prescribed funeral rules which were meant to be economical and hygienic, but which removed all signs of piety (the corpse had to be sewn in a linen sack, put in a wooden coffin, transported during the night, with no mourner accompanying, to cemeteries well beyond the suburbs of cities, and thrown into mass graves without the coffin, which then had to be re-used). Many of these rules aroused so much discontent that his successor Leopold II (1790-92) had to suspend them (Joseph himself was forced to permit single-use coffins again in 1785, for fear of a popular uprising, as well as having to soften some regulations about traditional forms of piety). 3 Whereas the movement to «reform Catholicism» had started long before Joseph, with Joseph's ecclesiastical policy the lines between Catholic reform and heresy seemed to the Church establishment to become more blurred. Already by 1781, a year after Maria Theresa's death, there was talk of the Emperor's possible excommunication. The fear of a schism prompted Pius VI to take the dramatic decision to travel to Vienna. Nonetheless, if the papal visit in 1782 aroused the people's enthusiasm, it did little to alter Joseph's views. 4 A key element of his reform was the creation of general seminaries for the education of the clergy, run by the State. His brother Leopold closed these down, re-opening episcopal ones. 5 However, seminarians had to attend lessons in pedagogy and catechetics at a Normalschule or Hauptschule, and the majority of seminarians still studied theology at the University of Vienna, where the professors were Josephinists. They reduced Catholic religion to ethics (Sittenlehre) and minimized the transcendent dimension, highlighting the pedagogical aspect of pastoral and catechetical teaching. 6 As Grand Duke of Tuscany, Leopold himself had backed the Febronian attempt at reform against papal supremacy.
7 Febronian and Jansenist ideas were still present at the turn of the century. With regard to school policy, Joseph dismissed Ignaz Felbiger, architect of the school reform of 1774, which had prescribed compulsory schooling for both boys and girls aged 6-12 and had introduced the Normal method of teaching, which Felbiger had taken from the Berlin pietists Hecker and Hähn. Whereas the traditional way of teaching in elementary schools was individual, the Normal method was whole-class instruction, with the teacher explaining or reading to the entire class at the same time. Tools like the blackboard and new books were necessary, as well as the precondition that pupils had the same level of knowledge. A strong emphasis was put on a rational way of teaching and on a mnemonic device to remember the faith's contents, grammar and moral rules (Tabellar- und Literal Methode). Felbiger applied the Normal method to the teaching of religion, too. 8 Catechism was taught in elementary schools, where most teachers were priests. During Maria Theresa's reign, Enlightenment ideas had already started to enter into catechisms: in Austria and Bohemia the most widespread until 1777 was the Katechismus für drei Schulen (1750) by the Jesuit Ignaz Parhamer, who had already stressed the importance of actually understanding the texts and not just memorising them. 9 The abbot Felbiger, who had already written the new school books, wrote a catechism, which met with opposition from Cardinal Christoph Anton Migazzi, archbishop of Vienna, who criticized his encouragement to read directly from the Bible as well as his use of the Lutheran version of the Psalms. Thousands of people in the Monarchy were closet Protestants (Geheimprotestanten), who pretended to be Catholic, especially in Bohemia, Moravia, Carinthia and Styria, but also in Austria. They were suspected of disloyalty, Maria Theresa having fought two wars against Protestant Prussia. Traditionally, Catholicism was one of the main unifying bonds of the Habsburg Monarchy, so in spite of the acceptance of a more tolerant religious attitude, Maria Theresa did not share Joseph's more open views. 10 Already accused of Protestantism for his school reform, Felbiger had to defend himself. He had modelled his catechism on Fleury's, with increasing levels of difficulty. His catechism was rejected by the Roman Inquisition. 11 Maria Theresa, who always backed Felbiger, then personally chaired a commission, in which both Migazzi and Felbiger sat, which rapidly produced the Einheitskatechismus (Standard catechism), to be used in all schools and, from 1781, throughout the entire Monarchy. 12 The issue was singularly relevant and involved pedagogical aspects and teaching methods as well as philosophical ideas. Various catechisms were used for religious instruction in the churches of the Habsburg dioceses; however, the same school catechism had to be used everywhere.
13 The purpose of this article is neither to go through the school catechisms nor to examine them from a theological point of view. The aim of this paper is to present how this question was addressed and to what extent new pedagogical ideas were accepted in the texts and handbooks for catechists, and/or were imposed as compulsory texts for future school catechists. Written by priests and educationalists, these texts had a profound impact on the Empire and were recognized as prominent at the time, as well as in the historiography. Analysing this should allow us to check if and how, after decades of debate, Enlightenment ideas (particularly the pedagogical ones) were present in the manuals on the teaching of religion in the Restoration age in the Habsburg Empire, when the centralization of school manuals and textbooks became completely effective.

THE SOCRATIC METHOD
Socrates was a key figure in the Enlightenment: his logical criticism, his irony, his reasoning in religious matters, his moral rectitude and his death all made him a «lay Christ», a «pagan Saint», a master of virtue, the symbol of the man who was able to think in an autonomous way. Christian F. Gellert was called the «European Socrates». Johann Christoph Gottsched compared Christian Wolff to Socrates for the hostility he had to face in the religious domain. 14 Philanthropists like Ernst Christian Trapp, Johann Bernhard Basedow, Friedrich Eberhard von Rochow, Joachim August Campe, Johann Stuve and Augustin Hermann Niemeyer used rationalistic catechetics, named «Socratic didactics». In 1780 the Philanthropist Christian Gotthilf Salzmann published a successful book on how to teach religion to children, in which he presented the Socratic dialogue as the correct way to do this. 15 Socratic didactics stressed the use of reason over memory and hence opposed not only traditional teaching, but also the «Normal method». 16 The Philanthropists rejected the Socratic method sokratisieren in favour of the katechisieren, defining the first as a method that goes from simple and unknown to complex concepts. The word Mäeutik (Maieutics) was introduced into the German language in the second half of the eighteenth century, as a Greek word. By Maieutics, the Philanthropists and the philosophers of the Enlightenment such as Kant or Lessing meant the positive aspect of Socratism: they did not aim to develop the negative, critical, sceptical irony of Socrates, which could lead to radical doubt. Instead their aim was to teach how to put questions properly, in order to make pupils reflect, using their own intellect. Catechetics became synonymous with Sokratik and Erotematik (the art of putting questions). 17 In Joseph's decade, Felbiger's «Normal Method» was replaced by the Socratic pedagogy throughout Austria, Salzburg and Bavaria. The Tabellar- und Literal Methode was dropped. The influence of the German pedagogy of Philanthropism grew. Both school textbooks and children's and juvenile literature bore the stamp of Enlightenment culture, where ethics was separated from religion. 18 The Archbishop of Vienna, Cardinal Migazzi, opposed this tendency in vain. He had once been an advocate of the katholische Aufklärung, but then turned completely against it, when he saw how Joseph's policy was abandoning Rome. In 1789 Joseph II ignored Migazzi's protests against Campe's and Salzmann's books, refusing to censor them, as did Leopold II a year later.
19 Under Joseph, the catalogue of prohibited books was reduced from 5,000 to 900. 20 Enlightened Catholics and Febronian bishops, who supported religious reform, were inclined to see some good in Protestantism. 21

The rationalistic Socratic method of J. A. Gall in Joseph's age
Joseph Anton Gall (1748-1807) was one of the most convinced exponents of the educational Josephinian theories. Born in Baden, in the free imperial town of Weil der Stadt, into a middle-class family, he studied with the Jesuits in Augsburg and in Heidelberg. He then went to Vienna, where he learned the Normal method, working closely with Felbiger and becoming a catechist in Vienna's Normalschule. However, he came to disagree with Felbiger's methods. Influenced by Basedow, Gall criticized Felbiger's focus on memorization. At Gottfried van Swieten's suggestion, Joseph II appointed him Chief Inspector of Schools in Felbiger's place in 1784 and Bishop of Linz in 1788 (in spite of the fact that Gall was from humble origins). School reform was closely linked to religious policy. 22 Gall backed Joseph's Church policy, was in favour of the closure of monasteries and was hostile to baroque piety. With Leopold Ernst, Count of Firmian, and Hieronymus Joseph Franz de Paula, Count of Colloredo, bishops of Passau and Salzburg, Gall was one of the three bishop leaders of the Austrian katholische Aufklärung in Joseph's age. 23 But so strong was the influence of rationalism and Protestantism on his thought that his theological adherence to Catholicism is questionable. 24 Reading his writings makes one doubt the correctness of the term katholische Aufklärung instead of christliche Aufklärung when referring to his works. In 1783-84 Gall published in Vienna (significantly, under a pseudonym) his most famous work, in three volumes: Sokrates unter den Christen in der Person eines Dorfpfarrers (Socrates among the Christians as a country parson). 25 Here he applied in a very clear and simple way the key concepts of Josephinian religiosity. Still anchored in Felbiger's katechisieren, his didactics developed the Socratic method (sokratische-erotematische Methode) which, as mentioned above, aimed at stimulating reason more than memory, using a dialogic form. The title of the book already indicates Gall's intention. Socrates, emblem of Enlightenment pedagogy, lay critic of pagan religion, master of the art of questioning, was embodied in a parson, who brought the Light to country people, as the Greek philosopher brought the Light of the Truth to the pagans. In his introduction, Gall made a comparison between Catholic peasants' devotion and pagan beliefs. Not without irony, he wished he could be spared drinking hemlock so that he would be able to carry on explaining his ideas in public. 26 The fact that Gall used a pseudonym shows that in 1783 he had to be careful of criticism (he was indeed accused of being a Freemason), but with the Emperor's backing he would soon be appointed Chief Inspector of Schools and then even bishop. Gall's ideas were radical, but he cunningly put them in such a way that they seemed undeniable, for they were based on simple and logical reasoning. The book Sokrates unter den Christen in der Person eines Dorfpfarrers is constructed as a series of dialogues between the Socratic parson and children, ignorant women, ex-nuns, poor people, etc. The parson's way of reasoning was apparently maieutic, but he used simple reasoning to lead people to accept what he thought. In this way, Gall dismantled many traditional expressions of popular piety.
The first two dialogues, for instance, belittled Marian devotion: why pray with the Rosary, which has ten avemarias for every paternoster, when God is superior to the Madonna and the paternoster is actually the only prayer that Jesus taught us? It is always better to pray directly to God, rather than to ask for Mary's intercession. To the little girl who replies that it is easier for her to pray to Mary, for she reminds her of her mother, who is sweet and manages to obtain special concessions for her from her father, the Socratic parson coldly replies that God is loving too. Praying in front of statues was compared by him to foolishly worshipping a graven image, as pagans did. To children who prayed in front of a statue of Jesus, the Socratic parson easily proved that it was only a wooden statue, stressing the philosophical difference between a being and a being's image and making the children feel foolish. In another dialogue, an old woman prays in front of a statue of the infant Jesus, because she thinks his face is so loving and sweet that it encourages praying. The Socratic parson dismisses her attitude with his rationalistic logic. He reminds her that Jesus was a grown man, died when he was 33 years old and then rose again from the dead. Jesus now sits in heaven as an adult, so we must pray to him as he is, just as she addresses her 30-year-old son as an adult and not as a child any more. 27 Devotion to the infant Jesus was fairly widespread in the seventeenth century, especially in convents and in Bérulle's Oratoire, and was the sign of a warm, innermost piety as well as of a respect for childhood. 28 But Gall objected that the images did not stimulate the intellect and made God a tangible being, as in the pagan world. Gall's position was clearly anti-Jesuitic, and was far from the stimulation of the senses provoked by Ignatius' Exercitia spiritualia. In another dialogue, Gall explained to a peasant that Calvinists and Lutherans were not wicked: their mistake did not originate from an evil heart, but from the fact that they were born into a Protestant family and environment, just as Catholic children learned their faith naturally from their parents, teachers, and parish priest. 29 Gall is very skilful in making the reader identify himself with a Protestant. His reasoning shows the cultural atmosphere that led to the Toleranzpatent, but the logical consequence of the dialogue is actually the equivalence of denominations. Gall also strongly contested papal primacy and in his second volume vehemently opposed monasteries and congregations, which are not mentioned in the Gospels. Contemplative activity and mysticism were misunderstood by Gall. The typical Enlightenment utilitarianism here met Protestant polemic and sided wholly with Joseph's policy of suppression. In the third volume, Gall came out in favour of priests' marriage, on the grounds that the Apostles were married and celibacy is not mentioned in the Gospels, using arguments popular in Protestant theology. 30 Cardinal Migazzi's opposition was entirely predictable. He was opposed to the excesses of baroque piety, a devotee of Muratori and one of the first supporters of the katholische Aufklärung. But he came to fear the effects of a reformed Catholicism that used Protestant arguments and caused bewilderment in the simple faith of uncultivated believers, so Migazzi became a fierce opponent of Joseph's policy.
31 More than being a «regulated» one, this devotion was dominated by a strong rationalism that aroused doubts in the reader's faith, refused mysticism, did not understand the sense of Franciscan or Ignatian spirituality, and reduced praying to a mere intellectual exercise, arid and far from the hearts of believers, especially children and the illiterate, who were used to visual representations. The criticism of baroque piety was conducted by Gall in a rationalistic way, which presented to simple people logical reasoning which, correct as it might be, failed to move the heart. Denying the intercession of Mary and the saints, moreover, was a negation of a theological point of the Catholic faith. 32 The effects of the Emperor's religious tolerance were frightening for the Cardinal and for the Pope: the Toleranzpatent had encouraged clandestine Protestants to declare themselves and Catholics to change their religious allegiance. The number of Protestants in the Empire doubled rapidly in the Eighties. 33 As regards pedagogy, Gall accused Felbiger's Normal method of being too mechanical and memory-dependent. Thus, he contested the Tabellar- und Literal Methode, which was a pillar of Felbiger's didactics, and instead gave great prominence to moral short stories, introduced into school texts by the Prussian Philanthropist von Rochow with his Kinderbuch. Soon after his appointment as Chief Inspector of Normal Schools, Gall asked for all of Felbiger's textbooks to be replaced, but the Studienhofkommission refused, objecting that this operation would cost too much. The Studienhofkommission did not entirely accept the reform that Gall was pursuing as Chief Inspector and in 1786 abolished Felbiger's Tabellar- und Literal Methode, but at the same time established that a memory-based didactic method had to be maintained, especially for religious subjects. 34 Gall then published a primer, a reading book and a little book on ethics that was modelled on Campe's Sittenbüchlein. Gall in fact depended explicitly on Philanthropism, from Villaume to Campe. 35 Gall's maieutic was heavily rationalistic and stressed the importance of the function of the intellect in knowledge, diminishing the role of memory. However, if the technical nature of the Normal method undoubtedly carried the risk of everyday teaching becoming an arid, mechanical mnemonic process, Felbiger's pedagogy did not assign primacy to memory either: the aim of memorizing was to discover the Truth, not simply to fill the organ of recollection. The memory allowed access to the essence of one's being, in accordance with Augustine's philosophy. The will then had to complete the moral process. The memory indicated the representation of beings, which the intellect had to link through logic and mathematical language. The difference between Felbiger's and Gall's pedagogy did not lie in diverse conceptions of intellect, but in a different notion of memory and of will. Felbiger took from Plato and Augustine both the positive view of memory, as a real channel of knowledge and way to God, and the pessimistic opinion of the will, corrupted by original sin. By contrast, Gall followed the rationalism of the Enlightenment and of Philanthropic pedagogy, reducing memory to a purely mechanical role and instead adopting an optimistic conception of the will, which adhered to the good once it was recognized through the intellect, as Socrates thought.
36 As well as Socratic dialogue, Gall also used another narrative model to make the evangelical message simpler and clearer: the parables. In 1794 he published an anonymous work on religious parables for children and adults, in three volumes (Parabeln). 37 However, Gall's parables had very evident and simplistic allegorical meanings (the father portrayed God, his children the men; the rich lord represented God and his peasants the men, and so on). He wrote that this kind of prose was an effective way of teaching, since it reminded the user directly of Christ, whilst also stimulating curiosity and being easy to remember. 38 But rather than being short parables, like those told by Christ, they were short stories, followed by moral explanations. Gall's aim, as he explicitly declared, was to make people adore and serve God, avoiding superstition and using images and ceremonies in a rational way. The book was successful. In 1794, the Piarist priest Franz Innozenz Lang, member of the Studien-Revisions-Hofkommission (the body that replaced the Studienhofkommission for a few years), praised it, and it was reprinted in 1797 and 1820. 39 In 1812 Andreas Reichenberger, influential Josephinist professor of pastoral theology at the University of Vienna, also praised these three volumes. 40 But on 16th February 1822 the Studienhofkommission forbade the use of these volumes as school prizes: their rationalism had been superseded, as we shall see. 41 Meanwhile, the Socratic method spread in Salzburg and in Bavaria, where Bernhard von Galura, canon of the Cathedral of Freiburg im Breisgau and later Bishop of Brixen, adopted the Socratic catechetic to explain the sacrament of the Eucharist. 42

Franz Michael Vierthaler and the revision of Socratism
During Joseph's reign and later, some teachers taught a natural and moralistic religion, in which the differences between the Catholic and Protestant faiths faded. In 1790 several bishops lamented this to Leopold II. 43 The first important change to the Socratic method came from Franz Michael Vierthaler (1758-1827). Born in Bavaria into a humble family, he studied under the Jesuits and at Salzburg University. In Salzburg he was appointed School Director and head of the Normalschule by the bishop. In 1791 Vierthaler published the Elemente der Methodik und Pädagogik 44 (Elements of Didactics and Pedagogy), a textbook for trainee teachers, which went to four editions, the last revised one in 1810. He taught pedagogy in the seminary and published the Geist der Sokratik (The Socratic Spirit), a successful book that had three editions: 1793, 1798, 1810. 45 He also taught at the university. When Salzburg became part of the Habsburg territories in 1806, he was appointed Director of the Orphanage of Vienna. He ameliorated the life and the educational system of the pupils and made the Orphanage a model one. 46 Vierthaler belonged to the Enlightenment culture too, but managed to maintain a solid link to Catholic pedagogy. He had a very good knowledge of Greek and Latin literature, which allowed him to rewrite the Socratic pedagogy. He quoted not only Plato and Xenophon, but many Greek writers (such as Homer and Plutarch) and Latin authors (especially Cicero, Horace, Quintilian). He knew Rousseau and Filangieri, and many educationalists in the German language: Basedow, Salzmann, Campe, Resewitz, Rochow, Villaume, Gellert, Weisse, Felbiger, J. M.
Sailer, Kant, and Pestalozzi. Vierthaler went back to the essence of the Socratic method and claimed that nobody really applied it. The so-called Socratic method was just a string of questions on difficult matters that expected children to give overly complex answers. In the Geist der Sokratik he accused the existing catechisms of using an obscure and dry language, unsuited to children. 47 Questions were arid riddles (the Philanthropists used riddles in teaching), 48 but children should enjoy lessons. Raising objections should be a means of entertaining children in an enjoyable way, and children should use their own words in answering questions. 49 The Socratic method of the time, he pointed out, was a mnemotechnic, whereas Socrates' real one was a psychological tool: Socrates made people use their reason, helping them to find the truth by themselves and not through an external authority. Posing questions did not mean being Socratic: it was necessary to learn how to ask questions properly, so that young people would answer in their own words. Objecting and contesting should have the aim of making young people think but also of enjoying the process of learning logically. Children should like this activity, taking pleasure in their capacity to answer objections. The maieutic method was not boring; on the contrary, it stimulated a meta-knowledge: «Guessing is an exquisite joy for children, and nothing stimulates thinking so much as the awareness of being able to think». 50 Besides, Socrates not only used maieutics, but also sermo continuus, as well as allegories, analogies and fables. Vierthaler recommended the use of short stories, fables, poems and sayings, in order to stimulate children's heads and hearts. Contrary to Salzmann, he believed that not only the parables and the Gospels, but also certain stories from the Old Testament were particularly suitable for teaching religion to children. Fables, «more ancient than history», also contain a wisdom which goes straight to the heart and form a narrative canon beloved by children, who talk to animals and give life to inanimate objects. In this, Vierthaler refuted Salzmann and Rousseau, who denied fables any educational value, and accepted the traditional pedagogy, which was rooted in the Classical Age. 51 In addition to fables, he recommended short poems, for children love rhymes, and proverbs. But since proverbs are frequently not ethically sound, it is necessary to teach them in a critical way. 52 The Socratic method had to be adapted to pupils' different ages and characters: the didactics had to be used in a flexible way.
53 The Normal method was also too mechanical for him. The primary educational aim was the development of morality, and religious education was the educational medium. Purely subject-based teaching, which did not provide pupils with ethical and religious values, would produce arrogant boys, full of themselves. In opposition to Rousseau's thinking, Vierthaler claimed religious education should not be delayed until the pupil was 15/16 years old, that is to say when he had acquired full control of his logical faculties, but had to begin at birth. The Christian truth is comprehensible to children too, because it speaks to the heart as well as to the intellect. Indeed, Jesus himself used different speech registers according to his audience and spoke to children and the illiterate. Parables and episodes of Christ's life are particularly good for children's minds, which can grasp the abstraction only if it is shown in hard facts and material objects. 54 Vierthaler's conception was halfway between Enlightenment and Catholicism: he saw religion in a utilitarian way, as a medium and not as the superior aim of education, but he refused a rationalistic teaching of religion, especially in Elemente der Methodik und Pädagogik (1810), where, in spite of the fact that he defined Anton Gall as «one of the worthiest Austrian bishops», 55 he explained that children do not have adults' logic, hence it is not necessary to prove religious concepts to them: it is enough if they believe in them, for they will reflect on them later on. The assurances and examples of the people they love (parents, relatives, teachers) are better than long and difficult explanations. The God of Reason is the philosophers' God; he is not the Father: children do not understand the first, but surely know how a loving father behaves. Moreover, children may understand certain truths better than adults. In these passages Vierthaler eventually leaves rationalism behind and partially contradicts what he wrote in the Geist der Sokratik. Children are not so much impressed by logical reasoning as by examples. Here he opposes Kant and the gap between ethics and metaphysics, between morality and religion: Christianity is in fact both. 56 But how to teach Christian truths to small children? A child's catechism must be written in such a way as to be liked and understood by its young readers. A catechism written for children of different ages and suitable for children's minds still did not exist, in spite of Felbiger's attempts, although some Austrian theologians had started moving in the right direction. The desire to be clear and the concern to be orthodox produced catechisms that were correct, but unsuitable for young minds. It would be better to stick to the Gospels, reproducing Christ's words, rather than using men's formulas. Religion does not concern just memory and reason; it involves the heart too.
51 Rothbucher, Franz Michael Vierthalers «Geist der Sokratik», 77. 52 Vierthaler, Geist der Sokratik, 177-212. 53 Beranek, Die psychologischen und bildungstheoretischen, 140-150. 54 Vierthaler, Geist der Sokratik, 82-83; Vierthaler, «Elemente der Methodik und Pädagogik», in Ausgewählte pädagogische Schriften, 151-155.
57om an Enlightenment and Philanthropic background, Vierthaler then moved towards neo-Hellenism and the Catholicism of the new century.In the 1810 edition of the Elemente der Methodik und Pädagogik he quoted Frint and Leonhard; in the 1824 edition of the Entwurf der Schulerziehungskunde he quoted Milde. 58Tackling the classic question, whether the State had to educate the man or the citizen, he answered that the State had to educate both, assigning priority however to the Bildung: humanity should not be sacrificed on the altar of the State.Faced with a conflict between the State and humanity, the second should prevail. 59This limitation of the power of the State was far from 56 Vierthaler, «Elemente der Methodik und Pädagogik», 153-156. 57Vierthaler, «Elemente der Methodik und Pädagogik», 158-161. 58Vierthaler, «Entwurf der Schulerziehungskunde», in Ausgewählte pädagogische Schriften, 172. 59Vierthaler, «Entwurf der Schulerziehungskunde», 172-173. Gaheis was influenced by German Philanthropism (Basedow, Campe, Rochow and Salzmann), by Kejetan Weiller and the Bavarian Enlightenment, but also by Kant and Pestalozzi.Socratic didactics should aim at stimulating pupils' interests.Education should respect the child's nature and develop not only his intellect, but all his faculties, as Pestalozzi had pointed out.In the 1809 edition of his Handbuch Gaheis dedicated a full chapter to Pestalozzi, who was not widely known in Austria at that time. 63e described Lienhard und Gertrud as a book that he could not recommend highly enough 64 .In spite of the pedagogical value of his Handbuch and the success it achieved (it was widely used in Bavaria, and in Austria it remained the key text for trainee teachers up to 1817), this book never enjoyed official recognition from the Studienhofkommission and was never imposed as an official text for trainee teachers, because Gaheis considered religion to be just one of the subjects that children should be taught and only the short tenth chapter of the Handbuch was dedicated to catechism. THE AUSTRIAN CATHOLIC PEDAGOGY: V. E. MILDE AND HIS FOLLOWERS In the age of Francis II/I, after the French Revolution, three cultural trends competed in Vienna: the Josephinists, who retained a significant position in the Universities of Vienna and Prague; the Roman Catholic wing, originally led by the ex-Jesuit Nicolas Josef Albert Dießbach, and subsequently by Fr.Klemens Maria Hofbauer; and the Austrian Catholic wing, which rejected theological rationalism but approved jurisdictionalism.Milde, Frint, Leonhard belonged to this last faction, which enjoyed the Emperor Francis' support. 65In Francis' long reign 63 Cfr.Franz Gaheis, Handbuch der Lehrkunst für den ersten Unterricht in deutschen Schulen (Wien: Doll, 1809, 4.° ed.), 284-296, the edition I could consult.Pestalozzi indicated in the visual intuition (Anschauung) the foundation of knowledge and therefore of teaching: «Ich habe den höchsten obersten Grundsatz des Unterrichts in der Anerkennung der Anschauung als dem absoluten Fundament aller Erkenntnis», Johann H. Pestalozzi, Wie Gertrud ihre Kinder lehrt.Gesammelte Werke (Zürich: Rascher, 1949), 237. 64Gaheis, Handbuch der Lehrkunst, 292. 65 (1792-1835) historians have distinguished two periods: a first phase of late Josephinism, up to circa 1820, and a second phase of overcoming Josephinism, and of Restoration.Francis had breathed an anticlerical atmosphere in Florence and in Vienna.K. L. 
von Metternich, Foreign Minister from 1809 and Chancellor from 1821, was not against Josephinism.66However, during the meeting between Pius VII and Francis I in Rome in 1819, the Pope demonstrated to the Emperor how many Austrian theologians were not exactly following the Vatican guidelines and Francis then distanced himself from Josephinism. As for the debate about teaching methods, on 9 th August 1803, the Councillor of State Martin Lorenz, responsible for schooling, accepted Vierthaler's ideas and maintained that the sokratische Methode was being wrongly used in the last two years of the Hauptschule: the method of finding by oneself (Socratic) is based on the principles of syllogistics: two sentences are presented to the pupil, who has to find a third one, putting them in a relationship.This exercise requires more logic than a child has and leads children to obscurity. 67renz was a Josephinian priest, but he was influenced by Augustin Gruber, leading catechist of the Vienna Normalschule and future bishop of Salzburg, who fought against the rationalism of the Sokratische Methode 68 .Lorenz worked at the new law about schooling, which was issued only in 1805, due to the delay caused by the Napoleonic war, and which prescribed that the teaching method should educate in a harmonious way all the faculties of the soul («übereinstimmende Bildung aller Seelekräfte»).With this law, the Politische Schulverfassung, the Socratic method was officially dropped. 69A new method had to be shaped: a Chair of Education had to be established at the University, in order to provide a uniform educational theory and method for the Empire.This was Milde's task. V. E. Milde's pedagogy Born in Moravia, Vincenz Eduard Milde (1777-1853) studied in the Seminary of Vienna, reformed first by Joseph II and then Leopold II. 70he future priests lived in the seminary but studied at the University of Vienna, still imbued by Josephinism.Milde studied Oriental languages and Old Testament with the Moravian Johann Jahn, whose interpretation of the Bible raised Cardinal Migazzi's protests with two of his books being condemned by the Vatican.The Benedictine monk of Melk Anton Reyberger, professor of Moral Theology and a follower of Kant, also exerted his influence on Milde.History of the Church was taught by Mathias Dannenmayer, already professor in Friburg, whose ideas were antipapal and pro-Protestant and who claimed the clergy should be subject to the Emperor rather than to the Pope.In spite of Cardinal Migazzi's protests, he retained his chair, even after Joseph II's death.His book on the history of the Church was on the Index Librorum Prohibitorum.Andreas Reichenberger had the chair of Pastoral Theology.He too was a Josephinist and reduced Catholic religion to ethics. 71 the end of the eighteenth century Vienna was imbued with German culture: Klopstock, Gellert, Gessner, Lessing, Lavater, Jakobi, Mendelsohn and Goethe were all well known.In pedagogy, Rochow, Campe, Resewitz, Villaume and Salzmann were widely recognised authors, read also in educational institutes.Rousseau was familiar.Reyberger and Milde had a full mastery of this literature. 
72Milde was ordained priest in 1810.He had shown his educational capabilities as a catechist in the Normalschule of St.Anna, in Realschulen, and in a girls' boarding school.In 1805 he was appointed Court Chaplain and in this office, held during the difficult Historia y Memoria de la Educación, 4 (2016): 49-84 period of the Napoleonic wars, he gained the Emperor's esteem, leading to his appointment as the first Chair of Education of the Habsburg Empire, in the University of Vienna in 1806, at the young age of 28.In 1810 however, due to fragile health, he had to give up the chair and leave the Court, retiring to a small parish in Low Austria. He published his academic lessons in the two volumes of the Lehrbuch der allgemeinen Erziehungskunde zum Gebrauch der öffentlichen Vorlesungen (Textbook of general pedagogy for use in public lessons) (1811-13), which from 1814 were to be used as the sole official text in all the chairs of the Empire up until 1848.In 1814 he went to Krems, where was appointed Inspector of the district's Elementary Schools and director of the Superior Institute of Philosophy.In 1823 Francis I appointed him Bishop of Leitmeritz (Litoměřice) in Bohemia and in 1832 Archbishop of Vienna, in spite of his humble origins.Milde dedicated constant care to the education of the clergy.He was known as a brilliant educator, so much so that it was said that no bishop in Vienna had ever had the same ability to deal with children.In 1848 he did not back the revolution and helped the government to keep order, reminding the clergy to stay out of politics.He warned against the dangers of the freedom of the press and remained faithful to the Habsburg Monarchy and was hence criticized as conservative.His Josephinian education and the favour he had enjoyed from Francis were influential in defining his political view.He died in 1855 and is buried in St. Stephen's Cathedral. 73r a long time forgotten by the historiography, Milde has been enjoying a reappraisal in the last twenty years, so much so that he has come to be considered the greatest Austrian educationalist of the nineteenth century. 
74His pedagogical system is wide and solid, based on a 73 Franz Loidl, Geschichte des Erzbistums Wien (Wien, München: Herold, 1983), 222-232.deep knowledge of pedagogical, philosophical, psychological and medical literature of the time.Heir to Josephinism but far from Gall's excesses, he presented a scientific and modern pedagogy, oriented towards ethics and strongly influenced by Kant.Milde respected Rousseau, but was more a follower of Kant.The human being has an inner moral law that comes from nature and from reason.Nature comes from God but is studied by science.Human attitudes are God's gifts.Anatomy and physiology show that the brain controls the body, but the body is connected to a spiritual force: the soul gives life, movement and aim to the body.Leibniz's and Aristotle's ontologies were at the root of his anthropology.Culture and nature were connected.Education had to respect nature, not to force it.This respect came from the recognition of God's wisdom as creator.Comenius, Rousseau, the Philanthropists and Pestalozzi were the educationalists he drew upon in this respect.The highest aim was the Selbstbildung, an idea that came from Kant.The pupil had to develop the ability to learn by himself.A method which relied solely on mnemonics was therefore of no use.Too many teachers believed they had to give their pupils knowledge: instead, they had to make them think.The faculty of memory was nonetheless very important, but it had to be developed in harmony with the other faculties.The memory is «indispensable for every operation of thinking.Reasoning depends partially on knowing and this depends on the culture of the memory».But one had to distinguish between a «mechanical memory», linked to impressions, and a memory connected to logical reflection. 75ason had to be stimulated gradually: contrary to the pedagogy of the Enlightenment, Milde criticized those who tried to make a child be rational before his time, hence not respecting his nature.Stimulating the intellectual faculty precociously only had the effect of filling the child's mind with empty formulas, not really understood by him.A deep knowledge can only be reached through a gradual and slow process.Memory and reason were important, but so were emotions, feelings, impulses and the will.Milde mastered the psychological literature of the time, the anticartesian movement of the second half of the eighteenth century, Herder, Karl Philipp Moritz and Friedrich August Carus, n Simonetta Polenghi Historia y Memoria de la Educación, 4 (2016): 49-84 child became an adolescent or an adult, in order to respect his freedom and let him choose freely whether to believe or not, was an illusory liberty, that led to religious indifference.The best way was to teach religious ideas which moved children's hearts: «we do not forget what we love». 81Much more effective than memorising formulas and precepts or than rational reasoning is example (of parents, educationalists and great men).Also religious rites had to be modified, in order to make them age-appropriate for children. 
82lde did not write a new catechism, but gave recommendations about the didactics, which were important for religious education too.He respected Felbiger, but considered him outdated.Like Niemeyer, Milde thought the teacher-led lesson, being close to a lecture, was only good for adults and well educated persons.For children, the erotematic or dialogic method was more appropriate 83 .But how to ask questions?The old mnemonic kathechisieren with written questions and answers to learn by heart was superseded, as was the rationalism of Gall's Socratic method, thanks to Vierthaler's work.The dialogic method should be founded on the observation of children and should respect their spontaneity.Lessons should start with concrete things and then shift to abstract ones, should present examples first, then the rule, in a synthetic and inductive way. J. Frint and the education of the clergy Jakob Frint (1766-1834), a close friend of Milde and the founder of a new institute in Vienna for the education of priests, the Frintaneum, was influenced by the Enlightenment too, but distanced himself from it.Born in Kamnitz (Česká Kamenice) in Bohemia, Frint studied in Graz and Leibach.He wanted to become a priest, but refused to enter into one of Joseph II's seminaries.In Vienna he was a follower of Dießbach.In 1792 he entered the new seminary of St. Stephen, opened by Cardinal Migazzi.Two years later, Milde entered the same seminary and soon befriended him.Frint was appointed Court Chaplain in 1801 and in 1804 in Vienna became Professor of Religious Science, a new subject introduced by the 81 Milde, Trattato di educazione generale: adattato all'uso di pubbliche lezioni, 346. 82Milde, Trattato di educazione generale: adattato all'uso di pubbliche lezioni, 338-349.but was above it, and Catholic Enlightenment, he admitted, had brought achievements that it would be anti-historic to refuse. 88Teaching religion just by obliging children to learn formulas by heart was not the right way.Along with Augustin Gruber, Archbishop of Salzburg, J. M. Leonhard, Archbishop of St. Pölten, and Bernhard Galura, Archbishop of Brixen, Frint criticized and overcame the Socratic method and the rationalism of the catechetics.They agreed in assigning an important role to memory and the heart in the teaching of religion to children: the aim of the catechism is not to demonstrate natural religions, but to teach the Holy Scriptures and the Revelation, and therefore a moral Christian life. 89 accordance with the Emperor and in keeping with the views of the Bavarian Sailer, in 1816 Frint, as mentioned above, opened the Augustineum (or Frintaneum) in Vienna, a new super-national institute for the education of young secular priests coming from all the territories of the Empire, chosen by the bishops for their excellence.The aim was to educate an élite of priests of rigorous morality, great intelligence and clear pastoral capabilities: the reform of the clergy was the first step to reforming the people.His references for this were St. Charles Borromeo, cardinal Bérulle, St. Vincent de Paul and the seminary of St. Sulpice. 
90 Johann Michael Leonhard: how to teach catechism Johann Michael Leonhard (1782-1863), a follower of Milde and friend of Frint, was the one who actually renewed the teaching of catechism with his book Theoretisch-praktische Anleitung zum Kathechisieren (Theoretical and practical guide to catechism teaching) (1819).This was also translated into Latin in 1820, approved by the Studienhofkommission and imposed as the prescribed text for all trainee catechists of the Empire in 1821.In the same year his book Practisches Handbuch zur Erklärung der in 88 Lentner, Katechetik und Religionsunterricht in Österreich, I, 308-309, 311-313. 89Hosp, Zwischen Aufklärung und katholischer Reform, 89-91.den k.k.österr.Staatenvorgeschriebenen Katechismen oder Angewandte Katechetik was recommended by the Emperor for the education of future catechists, and indeed went through many editions. 91Leonhard published other books on religious education as well as the school catechism for the youngest class.His works were translated into Italian, Hungarian, Czech and Slovak.His school catechisms replaced the Einheitskatechismus and he became the landmark author for decades.In 1812 he had been appointed Court Chaplain; in 1816 Director Spiritualis of the Frintaneum.In 1817 he was nominated Chief School Inspector of Austrian elementary schools, an office which he held up to 1835, when he was appointed Bishop of St. Pölten. 92ke Milde, Leonhard favoured a didactic method which drew upon elements from both the katechisieren and the sokratisieren.He, too, focused the catechetic method on the function of the question, but he firmly anchored the teaching of religion in the Holy Scriptures.The aim was neither just to memorize faith contents nor to develop only reasoning capacities.Educating the will was the most important aim.The right method aimed at a harmonious development of the human faculties, as Milde had shown.He used the didactics of intuition (Anschauung), which he recommended, following Comenius and Pestalozzi, 93 and in accordance with the pedagogy of Joseph Peitl, another follower of Milde, who was the director of the Normalschule of Vienna and whose Methodenbuch was imposed as a compulsory text for all trainee teachers of the Empire, in 1821. Leonhard opened his Theoretisch-praktische Anleitung zum Kathechisieren with an introduction, where he distinguished between two teaching methods: the transmission method (mitteilende) and the development one (entwickelnde).The first is used in subjects like history or positive religion, where the teacher has to provide pupils with information; the second draws ideas from the pupil's own mind 91 (Wien: Überreuter, 1820).The book was reprinted in 1822,1826, 1832, 1845. 92Wurzbach, Biographische Lexikon des Kaiserthums Oesterreich, XV, 4-8.Not at ease with this role, he left the office in St. 
Pölten and accepted the more modest function of bishop in the Army and bishop of Diocletianopolis, in Palestine, and lived in Vienna.As a sign of recognition the Emperor appointed him Privy Councillor (Geheimrat) and conferred on him the honour of the Iron Crown 1 st class.and experience, making the child think.The development method keeps the children's attention and is to be used as much as possible, but is always to be used in conjunction with the transmission method.A good catechist must be able to use both methods, switching from one to the other, remembering that children's intellect is still fragile.His aims are to teach religious truths, convincing pupils and educating their will, so that they will become moral people.Catechetics is very different from learned theology or homiletics.Learned theology uses complete concepts, which would be too difficult for children.Indeed, teaching religion to children is a difficult task, much more so than is normally thought.Lessons must be easy and not boring, and the attention of the pupils must be carefully awakened and kept alive.As for the contents of catechism, Leonhard rightly pointed out that whereas in the past teachers insisted on Christian doctrine, which had to be learned by heart, leaving children passive, now the opposite mistake was being made: teachers preached on morality, leaving aside revealed truths.The aim of catechistic lessons is to provide a religious and moral education.Where possible, it is a good thing to use a logical demonstration, for it encourages a child to tend to the good.The intellect is a gift from God, so it is right to make use of it when teaching religion, provided the teacher does not go beyond the capacity of children's minds.Jesus adapted his speeches to match the audience's capacity for understanding, too and so did the Apostles: «Infants need just milk, they would not appreciate a more nutritious dinner». 94Leonhard was referring to St. Paul: «I fed you milk, not solid food, because you were unable to take it» (1.Cor.3,2). Questions like the link between body and soul, or all the scholastic questions such as how much Adam knew, or how could he know how to speak, should be left out.Children should not be confused with questions with no definite answer.It is then important to teach in a gradual way, from the known to the unknown, from direct experience and intuition (Anschauung) to abstract concepts, using analogies and showing relationships.The first truths to teach are original sin, God's attributes, nature, and then Christ.Many believed catechism should begin with natural religion, followed by the revealed one.Leonhard refuted this Historia y Memoria de la Educación, 4 (2016): 49-84 point, on the grounds that revealed religion moved the heart much more than a merely logical demonstration.In this way, he rejected Socratic rationalism.Leonhard underlined that children can easily understand the rationality of revealed truths, since they are already present in their minds, and just need to be clarified.Here Leonhard follows a Platonic approach.Since the teacher should adapt his lessons to his pupils' minds, it is very important to divide the classes, in order to form homogeneous groups of children, with the same needs and interests, hence taking into account the age and intellectual abilities of the pupils.Leonhard applies one of the best principles of Milde, which anticipated progressive education. 
95 The second part of the book dedicated a chapter of 71 pages to the education of the intellect, a chapter of 26 pages to the education of the will, and then, significantly, a very short one of 4 pages to the memory. Leonhard had already pointed out that with the old mnemonic catechetics on one side, and the rationalistic one on the other, the truth lay in the middle and that man is a rational being, yet one with a heart. 96 However, a simple count of the pages he devoted to the three human faculties gives a clear indication of the role the intellect had acquired. Leonhard used a terminology that resembles Kant's, speaking of analytical and empirical judgments, of three main types of representations: external and interior intuitions, concepts of the intellect and ideas of reason. Reason is «the only faculty, which does not depend on the senses, it is the highest, pure spiritual power in men». 97 He provided many examples, and stressed that knowledge comes from the senses, and acquires a unity through the intellect, so it is pointless to insist on concepts children cannot trace back to their own experience. Where that is impossible, the teacher must use an image, for instance a picture of the Ark of the Covenant, or of the Temple of Jerusalem. When a new idea is introduced, it is important to compare it with ones already known (for instance, the Temple of Jerusalem must be compared with our churches). The influence of Comenius is clearly present in these pages. However, Leonhard goes on for many pages, describing how to deal with abstract concepts in a very detailed way, more appropriate for the teaching of philosophy than for the religious education of children. 98 He then describes examples, parables, proverbs and short biblical stories that can be used, and how the catechist must ask questions, in order to keep the children's attention and to help them understand the content. The catechist should meet children's doubts and bias with love and patience. But the most difficult job for him is to educate the pupils' will. He must always ask his pupils questions such as: what would God, your parents, or other good men think of you? Is what you are doing good? He should use moral short stories about childhood and train his pupils to judge and comment on moral/immoral deeds. Every moral rule is more likely to be remembered, the more it moves the heart, and the more clearly it is explained.
95 Leonhard, Theoretisch-praktische Anleitung zum Kathechisieren, 32-37.
96 Leonhard, Theoretisch-praktische Anleitung zum Kathechisieren, 41.
99 Leonhard then spends many pages on language and its importance: to make a lesson enjoyable and interesting, and to engage the heart, the tone and the modulation of voice play a role, but other rules must also be observed. The catechist must be careful to avoid using words that the children do not really understand, and to explain the different meanings a word may have. He has to know the way children think and speak, to be able to make himself understood by them. Indeed, too many catechists talk to children as they do to adults. But, as Fleury said, to use a learned language with children is like talking to them in Latin or Greek: they will not understand and will just learn empty formulas by heart. The teacher must adapt his way of talking to the minds of children, he must observe children, talk to them out of school and read what they write. Once he can talk as children do, he can gradually bring children to understand the language of the Church. 100 These pages on language represent the conclusion of a long debate that started with the criticism of the fanciful and florid language of baroque homiletics, begun by the Archbishop of Vienna Johann Joseph Trautson, who in 1752 sent out a pastoral letter in which he condemned the baroque sermons and their redundant rhetoric and bombastic language. 101 The renewal of homiletics and catechisms brought about by the Enlightenment and by Muratori's «regulated devotion» reaches its culmination with these passages of Leonhard, which overcome the dry rationalism of Gall, whilst maintaining the attempts to purify the language of the catechism which the Jesuit Ignaz Parhamer and Abbot Felbiger had already pursued in Maria Theresa's age. 102 The pedagogical attention to children's minds and their way of thinking reaches its high point here.
98 Leonhard, Theoretisch-praktische Anleitung zum Kathechisieren, 66-78.
99 Leonhard, Theoretisch-praktische Anleitung zum Kathechisieren, 125-153.
100 Leonhard, Theoretisch-praktische Anleitung zum Kathechisieren, 153-176.
Leonhard's analysis of language and of questions is detailed and is heir to Felbiger's Normal method: it recalls Felbiger's recommendations about how to construct a question, but his approach is placed in a wider pedagogical framework, that of Milde. It is in fact very close to Joseph Peitl's description of method and of the correct way to put questions in his Methodenbuch. 103 The last chapter, dedicated to the qualities of the catechist, is also heir to Felbiger's Methodenbuch and his description of the desired qualities in a teacher. However, it surpasses Felbiger's approach, thanks to the influence of Milde and Vierthaler. The key virtue of the catechist is love: he must truly love his pupils and be very patient. Peitl said the same of the teacher. The catechist (as the teacher) must be a moral example, otherwise he will fail in his task. If children do not love him, they will not learn willingly; if they hate him, they will hate religion. He must know psychology, to be able to deal with children, so he must keep reading and studying psychology and pedagogics. Jesus must be his model; Jesus who said: «Let the children come to me, and do not prevent them; for the kingdom of heaven belongs to such as these» (Mt. 19,14; Mk. 10,14; Lk.
18,16).Leonhard went on to provide a description of a child which echoed Rousseau and Pestalozzi, and certainly not St.Augustine, as Felbiger did: «children are particularly capable of understanding and gifted in following Jesus' doctrine, for their intellect is not infected with prejudices and mistakes and their heart is still pure from bad inclinations and passions» 104 . As stated previously, this book was the prescribed text for all trainee catechists in the Empire, and the Practisches Handbuch had four editions.Milde's textbook was compulsory in the Chairs of Education in all the Universities and high schools (Lyzeen) of the Monarchy; his followers Peitl and Leonhard were authors of the textbooks prescribed for elementary teacher training and for trainee catechists in all Habsburg territories.This article has shed light on textbooks for catechists and one should then examine the school catechisms.Nonetheless, given the fact that in the Habsburg Monarchy the school system was centralized and uniform; that school books and catechisms were imposed by the State; that the preparation of future teachers and catechists was strictly checked, the study of the manuals approved by the State provides significant evidence of how religion was to be taught -if not of how it really was always taught. To carry on this research, it would be necessary to examine the effective implementation of Leonhard's method.The recent historiography has proven the effectiveness of Milde and Peitl's pedagogical ideas on teacher training in the Habsburg territories.In the case of Leonhard, the position of bishops has also to be taken into account.In the Kingdom of Lombardy and Venetia, for example, the Theoretisch-praktische Anleitung and the Practisches Handbuch were introduced with years of delay (the Handbuch in 1841), since the bishops did not agree on the correctness of the translation, or on the novelty of the method: the Bishop of Brescia, for instance, insisted in 1819 that children still learn by heart (being very young and ignorant, they had to stick to simple concepts); the Chief Inspector of Lombardy elementary school Father Palamede Carpani, who approved Milde and Peitl's pedagogy, backed Leonhard, pointing out in 1822 that Italian catechists still made children learn by heart, rather than stimulating them to understand through rational deductions. 105So, the school catechisms were actually changed, but the pedagogical question about how they were taught in various territories of the Monarchy still has to be addressed. CONCLUSION At the end of the eighteenth and the beginning of the nineteenth centuries a new way of teaching religion was introduced, debated and contested in Habsburg Catholic territories.Rousseau and Salzmann's theories were discussed; rationalism and faith, natural religion and revelation were confronted.In the end more attention was devoted to child psychology and language.The cultural fracture caused by Josephinism became less severe: orthodoxy was restored, but new pedagogical ideas entered the teaching of religion, which were more respectful of children's development and their ways of thinking. It is not therefore correct to define the Austrian Restoration age simply as reactionary or conservative in the history of education as well as in that of schooling, for in officially taught pedagogy, the spirit of Enlightenment was still present, although in a milder form. 
106 The reading of short Biblical stories was retained from Protestant education, whereas Gall's sharp criticism against simple popular devotion was dropped. After the clash produced by the Josephinian approach, the orthodoxy of the content of the catechism was restored. The discussion on method was closely connected with the debate on general didactics, but since the subject to teach was religion, the question of the role of memory, heart and intellect was crucial. With Vierthaler and especially Milde, the Socratic rationalistic stamp decreased. As Leonhard wrote, thanks also to the reception of Pestalozzi, teaching religion engaged the intellect as well as the memory and heart, in a much more balanced way.
The importance of understanding children's mindsets and language; the acceptance of some pedagogical and didactic principles of non-Catholic educationalists such as Comenius, Rousseau, Kant, the Philanthropists and Pestalozzi; and the desire for effective religious teaching improved (at least) the official instructions for the teaching of religion in the Habsburg Empire in the early nineteenth century.
Note on the author: Simonetta Polenghi is Full Professor of History of Education and Head of the Department of Education in the Faculty of Educational Sciences of the Catholic University of the Sacred Heart, Milan. From 2013 to 2017 she was Deputy Vice-president of SIPED (Italian Pedagogical Society). She was appointed national evaluator for History of Education and Pedagogy in the Italian Research Quality Evaluation 2011-14 (2015-2016). She is a member of the Executive Board of the international journal History of Education & Children's Literature and a member of the referee and scientific boards of various series and scientific journals, among which Historia Scholastica (Prague) and Teaching Innovations (Belgrade). She has published four volumes, edited eight and published more than 90 essays and articles on the history of universities, the history of childhood, the history of schooling and the history of special education in the modern and contemporary age. Awarded two national prizes (1984 and 1993), she received a grant from the Austrian Academy of Sciences in 2008 and 2009. In 2011 she was awarded the Cross of Honour for Arts and Sciences by the President of the Austrian Republic for her studies on Habsburg pedagogy and schooling. She was a keynote speaker at the XIII National Hungarian Congress of Education, Eger, 7-9 November 2013, and was awarded the Comenius Medal by the National Pedagogical Museum and Library of J. A. Comenius, Prague, in 2015.
2018-12-07T03:54:36.006Z
2016-05-25T00:00:00.000
{ "year": 2016, "sha1": "7f8c1c98da726bd8e384404997e349daae9bf05b", "oa_license": "CCBYNC", "oa_url": "https://revistas.uned.es/index.php/HMe/article/download/15629/14433", "oa_status": "GOLD", "pdf_src": "Anansi", "pdf_hash": "7f8c1c98da726bd8e384404997e349daae9bf05b", "s2fieldsofstudy": [ "Education" ], "extfieldsofstudy": [ "Sociology", "History" ] }
17454346
pes2o/s2orc
v3-fos-license
Reproductive Biology and There is now considerable evidence for the involvement of K+ channels in nitric oxide (NO) induced relaxation of smooth muscles including the myometrium. In order to assess whether apamin-sensitive K+ channels play a role in NO – induced relaxation of the human uterus, we have studied the effect of specific blockers of these channels on the relaxation of myometrium from non-pregnant women. In vitro isometric contractions were recorded in uterine tissues from non-pregnant premenopausal women who had undergone hysterectomy. Apamin (10 nM) and scyllatoxin (10 nM) did not alter spontaneous myometrial contractions. However, 15-min pretreatment of the myometrium strips with apamin completely inhibited relaxation caused by diethylamine-nitric oxide (DEA/NO). The pretreatment with scyllatoxin significantly reduced (about 2.6 times) maximum relaxation of the strips induced by DEA/NO (p < 0.05). These results strongly suggest that, beside Ca2+ and voltage dependent charybdotoxin-sensitive (CTX-sensitive) K+ channels, apamin-sensitive K+ channels are also present in the human non-pregnant myometrium. These channels offer an additional target in the development of new tocolytic agents. Background Nitric oxide has been shown to be a potent inhibitor of spontaneous contractile activity of the myometrium from non-pregnant women. It has recently been shown that contrary to the finding in some smooth muscle, in the myometrium from non-pregnant women, there was no causal relationship between the relaxation induced by NO donors and the elevated production of cGMP [1,2]. A number of recent studies on both vascular and uterine smooth muscle have provided evidence for the involvement of potassium (K + ) channels in relaxation induced by nitric oxide (NO) donors [3][4][5][6][7][8]. In smooth muscle, K + channels play an important role in regulation of cell membrane excitability and contractile activity of the tissue [3][4][5][6][7][8]. K + channels consist of a diverse group of proteins with disparate structural features and controlling mechanisms. Calcium (Ca 2+ )-dependent K + channels have been found in many smooth muscles including myometrium from different species [4,[9][10][11][12]. Ca 2+ -activated K + channels up to now identified in human myometrium represent the type of large-conductance and voltage-dependent channels (BK) blocked by charybdotoxin (CTX) and iberiotoxin [11,13]. However, other classes of Ca 2+ -activated K + channels may exist in smooth muscle cells including K + channels with intermediate (IK) and small unitary conductance (SK) [14,15]. Ca 2+ -activated K + channels with small conductance found in different visceral smooth muscles [4,16,17] have so far not been identified in human myometrium. These channels, so called apamin-sensitive K + channels are specifically blocked by a bee venom toxin, apamin [18,19] and scyllatoxin (leiurotoxin I), a toxin from the venom of the scorpion Leiurus quinquestriatus Hebraeus [20,21]. Calcium-dependent apamin-sensitive SK channels and CTXsensitive BK channels can apparently co-exist in the same cell [18,22]. Recently, it has been demonstrated that the intermediate conductance K + channels, sensitive to both apamin and charybdotoxin exist in mouse intestinal smooth muscles and rat renal arterioles [14,15]. 
Although, inhibition of smooth muscle contraction by K + channels openers is a well-recognized mechanism, information on the expression and characteristics of various channels is needed to develop tissue and channel type specific K + channel openers. In order to assess whether apamin-sensitive K + channels play any role in NO induced relaxation, we have in this study examined the effect of specific blockers of these channels on the relaxation of myometrium from non-pregnant women. Methods Human uterine tissues were collected from 14 non-pregnant premenopausal women (age, 41-50 years; median, 46 years) who had undergone hysterectomy because of either dysfunctional bleeding, benign uterine tumors or cervical malignancy. All women were recruited from patients of the Department of Gynecology, Medical Academy of Bialystok, Poland. The women were informed about the nature and procedure of the study and gave their written consent. The local ethics committee approved the study. Myometrial samples were excised transversally from the fundus of uterus, placed in an ice-cold physiological salt solution and immediately transferred to the laboratory where processed as previously described [23]. Briefly, 4-8 strips, 6-7 mm in length and 2 × 2 mm of cross section area were obtained under a dissecting microscope. The strips were then mounted in an organ bath containing 20 ml of physiological salt solution (PSS) at 37°C, pH 7.4 and bubbled with carbogen (95% O 2 + 5% CO 2 ). Strips were left for the equilibration period of 1-2 hours. During that period the passive tension was adjusted to 3 mN. Activity of myometrium was recorded under isometric conditions by means of force transducers with digital output. The spontaneous contractile activity was treated as a control. After the recording of spontaneous activity the response of myometrium to nitric oxide and K + channel blockers was recorded. Quantification of the responses was done by calculation of area under the curve (AUC), amplitude and frequency of contractions. The area was measured from the basal tension over a 10-min period after each stimulus. The effects were evaluated by comparing experimental responses with the controls (set as 100%). Diethylamine-nitric oxide (DEA/NO), which has been shown previously to inhibit spontaneous activity in human [2,24] or rat [12] myometrium, in a concentrationdependent manner, was used as NO donor. Three or four strips from the same uterus were studied in parallel. One of them was always treated as a control and regularly washed with PSS. DEA/NO was given cumulatively directly into the organ bath in log increments within the concentrations range 10 nM to 100 µM. The effect of DEA/NO was observed in the absence and after 15 minutes preincubation with 100 nM CTX, 10 nM apamin or 10 nM scyllatoxin. The contact time for each concentration was 10 min. Only one concentration-response curve was obtained for each strip. Chemicals DEA/NO, a generous gift of dr Larry K. Keefer from Laboratory of Comparative Carcinogenesis, National Cancer Institute, Frederick, Maryland, USA, was dissolved in a 10 mM NaOH, and kept cold until dilution with cold pH 7.4 buffer immediately before addition to a bathing medium [25]. The concentration of NaOH in the organ bath never exceeded 0.001% v/v and had no influence on the experimental responses. Apamin and scyllatoxin, purchased from Sigma Chemical Company, were dissolved in distilled water. 
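Before moving on to the solutions and statistics, the following is a minimal sketch of how the AUC-based quantification described above could be computed from a sampled tension trace: the area under the curve is taken above basal tension over a 10-min window and then expressed relative to the spontaneous (control) activity set at 100%. The function names, the sampling assumptions and the use of NumPy are illustrative choices and are not taken from the original analysis.

import numpy as np

def contraction_auc(time_s, tension_mN, basal_mN, window_s=600):
    """Area under the tension curve, measured from basal tension,
    over a 10-min (600 s) window starting at the first sample."""
    t = np.asarray(time_s, dtype=float)
    y = np.asarray(tension_mN, dtype=float) - basal_mN   # height above basal tension
    mask = t <= t[0] + window_s                           # 10-min analysis window
    return np.trapz(np.clip(y[mask], 0.0, None), t[mask])

def percent_of_control(auc_response, auc_control):
    """Express a response as a percentage of the control (spontaneous)
    activity, with the control set to 100%."""
    return 100.0 * auc_response / auc_control

With a trace sampled, for example, once per second, percent_of_control(contraction_auc(t, y, basal), auc_control) would yield normalised AUC values of the kind reported in the Results (e.g. roughly 52% of control at the highest DEA/NO concentration).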
All substances were added directly to the organ bath containing physiological salt solution composed of (mM): NaCl 136.9; KCl 2.68; MgCl2 1.05; NaH2PO4 1.33; CaCl2 1.80; NaHCO3 25.0; glucose 5.55.
Statistical analysis: All data were analyzed statistically with PRISM 3.0 (GraphPad Software Inc., San Diego, Calif.). The data were analyzed with ANOVA, the Friedman test or the Wilcoxon matched pairs signed rank test, where appropriate. Statistical significance was accepted at a probability value of P < 0.05. Throughout the paper the results are expressed as mean ± S.E.M. and n denotes the number of tissues obtained from different patients. When the same protocol was run on two strips from the same uterus, the data were averaged.
Results: All experiments were performed on myometrial strips exhibiting regular, spontaneous contractile activity after the equilibration period. Cumulative administration of DEA/NO (10 nM - 100 µM) caused an inhibition of the spontaneous activity in a concentration-dependent manner (Fig. 2). The mean AUC calculated for the highest concentration of DEA/NO was (51.95 ± 4.68)% of control. The effect was seen as a decrease of the amplitude of contractions and a gradual reduction of their frequency (Fig. 1B). Both effects were statistically significant (Friedman test). Removal of DEA/NO from the bathing medium by washing with PSS (twice, at a 15-min interval) caused a gradual return of the contractile activity (Fig. 3). Charybdotoxin (100 nM), a blocker of Ca2+-sensitive potassium channels with large conductance, caused no change of the amplitude and frequency of the spontaneous contractions (Wilcoxon matched pairs rank test) (Fig. 4). However, 15-min pretreatment of the strips with 100 nM CTX completely inhibited the DEA/NO-induced decrease of AUC and amplitude of contractions (n = 10) (Fig. 2B and 5). In the presence of CTX, a small but nevertheless statistically significant decrease of the frequency was observed for the highest concentration of DEA/NO (100 µM). In experiments performed on tissues taken from 10 women, we studied the effect of apamin, a blocker of Ca2+-dependent K+ channels with small conductance, on DEA/NO-induced relaxation of the myometrium. Apamin at a concentration of 10 nM did not alter spontaneous myometrial contractions. In the presence of apamin, the mean values of the amplitude and frequency of contractions did not differ significantly from those observed before the blocker administration (Wilcoxon matched pairs rank test) (Fig. 4). However, pretreatment of myometrium strips with 10 nM apamin completely inhibited the DEA/NO-induced relaxation.
Figure 2: Concentration-response relationships of DEA/NO-induced relaxation before and after pretreatment with blockers of Ca2+-sensitive K+ channels: (A) DEA/NO alone (n = 10), DEA/NO after preincubation with 10 nM scyllatoxin (n = 5), or 10 nM apamin (n = 10); (B) DEA/NO in the absence and presence of 100 nM CTX (n = 10). Each point represents the mean ± SEM and * indicates effects significantly different from those observed in the absence of the blockers (ANOVA). The spontaneous contractile activity was treated as a control.
To test this effect we used scyllatoxin, a polypeptide isolated from scorpion venom that blocks the apamin-sensitive K+ channels in other tissues [20,21]. In a separate group of five experiments, scyllatoxin (10 nM), like apamin, did not alter spontaneous myometrial activity (Fig. 4). However, pretreatment of tissue with 10 nM scyllatoxin considerably reduced the DEA/NO-induced relaxation of the strips (Fig. 2A and 5).
In the presence of 10 nM scyllatoxin, the mean AUC value at 100 µM DEA/NO in the bath medium was (81.63 ± 2.54)%. Thus, in the presence of 10 nM scyllatoxin, the maximum DEA/NO-induced relaxation of the myometrium strips was about 2.6 times smaller than that recorded in the absence of the blocker. The difference was statistically significant (P < 0.05). However, the reduction of the DEA/NO-induced relaxing effect by scyllatoxin was lower than that observed after pretreatment with apamin. After pretreatment with 10 nM scyllatoxin, the AUC value calculated for 100 µM DEA/NO (81.63 ± 2.54%) significantly differed from that calculated in the presence of 10 nM apamin (97.13 ± 2.73%).
Discussion and conclusions: The present data show that DEA/NO causes a concentration-dependent decrease of the AUC, amplitude and frequency of contractions of the myometrium from non-pregnant women. Blockers of both SK and BK channels reduce the DEA/NO-induced inhibition of spontaneous activity of the myometrium. Relaxation of many smooth muscles by NO donors involves activation of a K+ current resulting in hyperpolarisation of the cell membrane. K+ channels may be activated by pathways involving direct action of NO and/or cGMP-mediated mechanisms [26,27]. In the majority of smooth muscles, the relaxing effect of NO is related to the opening of large-conductance Ca2+- and voltage-dependent K+ channels blocked by charybdotoxin and iberiotoxin (BK). The present data show that, in the myometrium from non-pregnant women, the relaxation of spontaneous contractions induced by DEA/NO is inhibited by charybdotoxin. The same effect has been observed before [1]. Thorough analysis revealed, however, that although CTX completely inhibited the DEA/NO-induced decrease of amplitude, it was less efficient in preventing the lowering of frequency. Using contraction as the only indicator of the DEA/NO influence on myometrial activity, we can only speculate about the mechanism of the observed effect. The concentration of CTX used in the experiments inhibits about 80% of the potassium current through BK channels [32]. That means that a fraction of these channels could remain unblocked, accessible to NO donated by DEA/NO. The fact that only the frequency of contractions is sensitive to activation of this fraction may indicate that, in the presence of 100 nM CTX, other types of K+ channels play the predominant role in controlling the amplitude of contractions. It also implies that NO donated by DEA/NO inhibits the spontaneous contractile activity by reducing the excitability of the myometrium cells, taking their membrane potential away from the threshold for action potential generation. Our data indicate that apamin and scyllatoxin, blockers of small-conductance K+ channels, can also counteract DEA/NO-induced relaxation. Apamin is a blocker of Ca2+-sensitive K+ channels with small conductance [28]. In different smooth muscles the maximum effective concentrations of apamin are within the range 1 nM to 1 µM [14,29,30]. The lack of NO-induced inhibition of contractile activity that we observed in the presence of 10 nM apamin suggests the existence of apamin-sensitive K+ channels in the cell membrane of the myometrium from non-pregnant women. On the other hand, in some smooth muscle preparations from animals, intermediate-conductance K+ channels exist that are sensitive to both CTX and apamin [14,15]. The similar effects of CTX and apamin on the relaxation caused by DEA/NO suggest that the same channel may be a target for both blockers.
Such a conclusion, however, is inconsistent with data obtained in presence of scyllatoxin, a blocker of Ca 2+ -activated K + channels with small conductance [21] that has no effect on intermediate or large conductance, Ca 2+ -activated K + channels [31]. The decrease of NO-induced inhibition in presence of scyllatoxin shown here supports the suggestion that Ca 2+ -activated K + channels with small conductance exist in the myometrium from non-pregnant women. The difference between effects of equimolar concentrations of apamin and scyllatoxin is in agreement with the fact that the scyllatoxin affinity to the SK channels is 10 -20 lower than that of apamin [30,32]. Our findings, however, are not in agreement with the data reported by others. Perez et al. [11] using cell membranes from myometrium from non-pregnant women that were incorporated into lipid bilayer have founded no apaminsensitive K + currents in this preparation. The lack of sensitivity to apamin was also observed in a beta-subunit of maxi KCa channel from human myometrium expressed in Xenopus laevis oocytes [33]. The discrepancy between our findings and the electrophysiological observations [11,33] may be explained by assuming that exposure to NO or metabolic activation is required to activate the apamin-sensitive K + channels. It has been observed that the transfer of ionic channels to the artificial environment resulted in an inactivation of these channels [34,35]. The data of the present study strongly suggest that, the apamin-sensitive K + channels exist in the myometrium from non-pregnant women. On the basis of our data we cannot, however, preclude that in myometrium from non-pregnant women exist channels sensitive to both CTX and apamin similar to those reported in some smooth muscles [14,15]. Further studies are necessary to verify this hypothesis. We have previously shown that K + ATP channel openers are potent inhibitors of contractile responses of the myometrium of non-pregnant women induced by vasopressin, an agent implicated in the pathophysiology of dysmenorrhoea [6]. Specific openers of apamin sensitive K + channels, if developed, should have strong potential in the treatment of dysmenorrhoea.
2014-10-01T00:00:00.000Z
2003-01-01T00:00:00.000
{ "year": 2003, "sha1": "66b39b04804f2078422c419459b3f507d5b05993", "oa_license": "CCBY", "oa_url": "https://rbej.biomedcentral.com/track/pdf/10.1186/1477-7827-1-8", "oa_status": "GOLD", "pdf_src": "PubMedCentral", "pdf_hash": "66b39b04804f2078422c419459b3f507d5b05993", "s2fieldsofstudy": [ "Biology", "Medicine" ], "extfieldsofstudy": [ "Medicine" ] }
163164916
pes2o/s2orc
v3-fos-license
Cardiometabolic importance of 1-h plasma glucose in obese subjects Background/objectives To study the importance and clinical usefulness of the 1-h plasma glucose (1hPG) in a Caucasian obese population with regard to the presence of prediabetes, diabetes, and metabolic syndrome (MetS). Subjects/methods We conducted a cross-sectional study of 2439 overweight or obese subjects. All received an oral glucose tolerance test (OGTT) using the American Diabetes Association criteria. ROC-curves were used to compare the sensitivity and (1-specificity) of 1hPG versus FPG and 2hPG to diagnose prediabetes and diabetes. Results Of 2439 patients (72.1% female) (age 43 ± 13 years, BMI 37.9 (34.6–41.6) kg/m2), 1262 (51.7%) had a 1hPG ≥ 155 mg/dL. The prevalence of prediabetes was 33.8% and of diabetes 9.8%. In these 240 diabetic patients, only 1.6% (four patients) did not show a 1hPG ≥ 155 mg/dL. Subjects with 1hPG ≥ 155 mg/dL were more insulin resistant (p < 0.001), had a higher waist (p < 0.001), visceral adipose tissue (VAT) (p < 0.001), systolic blood pressure (p < 0.001), microalbuminuria (p < 0.001), PAI-1 (p < 0.001), and worse lipid profile (p < 0.001) than subjects with 1hPG < 155 mg/dL. MetS was present in 64.1% of subjects with 1hPG ≥ 155 mg/dL versus 42.5% of subjects with 1hPG < 155 mg/dL (p < 0.001). In the group with 1hPG ≥ 155 mg/dL 32.6% had a normal glucose tolerance (NGT), 48.9% had prediabetes, and 18.5% was diagnosed with T2DM compared to 81.7% NGT, 17.7% prediabetes, and 0.6% T2DM in subjects with 1hPG < 155 mg/dL (p < 0.001). Among NGT subjects, 30.0% had a 1hPG ≥ 155 mg/dL and showed higher HOMA-IR (p = 0.008), VAT (p < 0.001), blood pressure (p < 0.001), and worse lipid profile (p = 0.001). Compared to 1hPG < 155 mg/dL, the sensitivity and specificity of 1hPG ≥ 155 mg/dL of prediabetes were 74.8% and 60.0% and for diabetes 97.1% and 53.2%, respectively. Conclusions This study supports the role of 1hPG value as a valuable tool in the detection of obese subjects at high risk for T2DM and MetS. Introduction Type 2-diabetes mellitus (T2DM) is increasingly prevalent and is associated with an increase in multimorbidity and mortality 1,2 . Therefore, screening and initiation of treatment are crucial 1,3 . For individuals at high risk of T2DM, including the obese population 4 , modifications in lifestyle, pharmacological interventions, and gastric bypass surgery can lower the incidence of T2DM and its complications [4][5][6] . Traditionally, preventive counseling is launched after detecting prediabetes, defined as impaired fasting glucose (IFG) and/or impaired glucose tolerance (IGT). However, 40% of patients suffering from T2DM show normal glucose tolerance (NGT) at their first oral glucose tolerance test (OGTT) 1,3,7,8 . Following its worldwide standardization 2,9 , the HbA1c was accepted as a diagnostic test for diabetes in 2010 by the American Diabetes Association (ADA). HbA1c is a very stable parameter, convenient for patients and medical staff, but its cost and the influence of other medical conditions on its level, makes HbA1c less attractive as a screening tool. In specific ethnic or geographic populations with NGT, a 1-h plasma glucose (1hPG) with a cut-off value of 155 mg/dL during OGTT was shown to be a valuable risk factor for the development of prediabetes and T2DM, respectively 1,3,9,10 . Indeed, 1hPG during OGTT might offer practical advantages over 2-h plasma glucose values (2hPG). 
Moreover, cross-sectional studies indicated that subjects with increased 1hPG showed an increased risk for metabolic syndrome (MetS) and cardiovascular diseases 3,11,12 . The aim of this study was to assess the value of a 1hPG in relation to the presence of prediabetes, diabetes, and MetS in a Caucasian obese population. Furthermore, we investigated the diagnostic sensitivity and practical use of the fasting plasma glucose (FPG) combined with the 1hPG, compared to the FPG combined with the 2hPG. Finally, we assessed the use of 1hPG among subjects with so called NGT as a suitable screening tool for MetS and cardiovascular risk factors. Participants Patients visiting the obesity clinic at the Antwerp University Hospital for a problem of overweight or obesity were included. None of these patients were involved in a weight reduction program at the time of enrollment. Every patient underwent a standard metabolic work-up, approved by the Ethics Committee of the Antwerp University Hospital and provided written informed consent. Inclusion of patients was based on age (≥18 years), completion of an OGTT and having a body mass index (BMI) ≥ 25 kg/m 2 . Patients with an established diagnosis of diabetes were excluded. Collection of data A metabolic work-up, including a clinical examination with anthropometry, was performed in fasting conditions. BMI was calculated as weight (measured with digital scale to 0.2 kg) over height (measured to 0.5 cm) squared. Waist circumference was measured between the lower rib margin and the iliac crest, while hip circumference was measured at the trochanter major's level. Waist-hip ratio (WHR) was calculated dividing waist circumference by hip circumference. Bio-impedance analysis, as described by Lukaski et al. 13 , was used to determine body composition. Fat mass (FM%) was calculated using the formula of Deurenberg et al. 14 . Cross-sectional areas of total abdominal adipose tissue (TAT), visceral abdominal adipose tissue (VAT), and subcutaneous abdominal adipose tissue (SAT) were measured by computerized tomography (CT) at L4-L5 level according to previously described methods 15 . Classification of patients Based on the criteria determined by the ADA, subjects were classified as NGT with FPG < 100 mg/dL associated with 2hPG < 140 mg/dL and HbA1c < 5.7%. Subjects were classified as IFG with FPG between 100 and 125 mg/dL, while subjects were classified as IGT with 2hPG between 140 mg/dL and 199 mg/dL, and/or HbA1c between 5.7% and 6.4%. Individuals with IFG and/or IGT were being referred to as having prediabetes 2 . Subjects referred to as probable diabetes were FPG ≥ 126 mg/dL and 2hPG ≥ 200 mg/dL or HbA1c ≥ 6.5%. Criteria for having diabetes were FPG ≥ 126 mg/dL and 2hPG ≥ 200 mg/dL and HbA1c ≥ 6.5% 2,18 . Statistical analysis All data were analyzed using statistical package for the social sciences (SPSS 21.0) software. Normality was checked using the Kolmogorov-Smirnov test. Variables that were not normally distributed were log transformed or square rooted when appropriate. Anthropometric measurements were presented as mean values with their standard deviation (SD) for normally distributed variables and median values with percentile 25 (P25) and percentile 75 (P75) for not normally distributed variables. Categorical variables were tested with the chi-squared (χ 2 ) test; differences in continuous variables were tested using independent sample Student's t test (parametric variables) or Mann-Whitney U test (nonparametric variables). 
To test variables between more than two independent groups, one-way-Anova (parametric variables) or Kruskal-Wallis (nonparametric variables) was used when appropriate. To investigate which subgroups were significantly different from each other, a Tukey post hoc analysis, independent Student's t test, or Mann-Whitney U test were performed as appropriate. Differences were considered significant at values of p < 0.05. Linear regression with studentized residuals was used to test whether differences were independent of influencing factors. Results were considered significant if p < 0.05. A multivariable linear stepwise regression analysis was used to examine parameters of the MetS associated with 1hPG. Collinearity diagnostics were performed to rule out independent association among variables. Receiver operating characteristics (ROC) curves were used to compare the diagnostic sensitivity and (1-specificity) of 1hPG versus FPG and 2hPG to diagnose prediabetes and diabetes. ROC-curves were also used to calculate sensitivity and (1-specificity) for different cut-off values. Sensitivity, specificity, positive predictive value, negative predictive value, and accuracy for prediabetes and diabetes were calculated. Given the significant difference in age and gender between those with normal and elevated 1hPG in NGT subjects, analyses were repeated correcting for these two factors ( Table 2). Discussion This cross-sectional study reports on an obese Caucasian population, having undergone an OGTT during a metabolic workup, and stratified subjects based on ADA criteria. The 1hPG is rarely estimated or taken into account. The discrepancy about the 1hPG literature led to a search for its added value in screening for diabetes 22 .To the best of our knowledge, we are the first to study the added value of the 1hPG value in an obese population. In specific ethnic and geographic populations, studies demonstrated the value of 1hPG with a cut-off value of 155 mg/dL during OGTT as a valuable predictor for progression to prediabetes and T2DM, respectively 1,3,9,10 . In a study by Priya et al. 3 , non-obese subjects with 1hPG ≥ 155 mg/dL were diagnosed with diabetes after a follow up of at least a year, while Fiorentino et al. 10 , made this conclusion in a five year follow up study 10 and Abdul Ghani et al. 1 after at least 7-year follow up 1 . This is a timeframe in which development of T2DM can be postponed or avoided through lifestyle intervention. Our ROCs revealed that for identifying the prevalence of diabetes, a 1hPG ≥ 155 mg/dL during a patient's OGTT, is a useful cut-off point. Furthermore, elevated 1hPG among obese subjects is associated with an~3 times higher incident detection of prediabetes, 30 times higher incident detection of T2DM and a 50% higher prevalence of MetS, compared to the group with normal 1hPG. These findings are in line with the study of Pareek et al. 8 , stating that subjects with 1hPG ≥ 155 mg/dL have a significantly increased risk of incident T2DM. Other studies, performed in specific settings with regard to ethnicity and geography, have also demonstrated the importance of 1hPG during an OGTT, suggesting that elevated 1hPG could facilitate clinicians in earlier detection of adults at risk for MetS, cardiovascular disease, and future T2DM 3,9,20,23,24 . Further analyses of metabolic risk factors in our study confirmed that they were independently associated with a 1hPG ≥ 155 mg/dL. 
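To make the screening metrics described in the statistical-analysis section concrete (sensitivity, specificity, positive and negative predictive value, and accuracy at a fixed 155 mg/dL cut-off), a minimal sketch is given below. The glucose values and disease labels are invented placeholders, not data from this cohort, and the code only illustrates the calculations rather than reproducing the authors' analysis.

```python
import numpy as np

def screening_metrics(values, has_condition, cutoff=155.0):
    """Confusion-matrix metrics for a 'test positive if value >= cutoff' rule."""
    values = np.asarray(values, dtype=float)
    has_condition = np.asarray(has_condition, dtype=bool)
    test_pos = values >= cutoff

    tp = np.sum(test_pos & has_condition)     # true positives
    fp = np.sum(test_pos & ~has_condition)    # false positives
    fn = np.sum(~test_pos & has_condition)    # false negatives
    tn = np.sum(~test_pos & ~has_condition)   # true negatives

    return {
        "sensitivity": tp / (tp + fn),
        "specificity": tn / (tn + fp),
        "ppv": tp / (tp + fp),
        "npv": tn / (tn + fn),
        "accuracy": (tp + tn) / (tp + fp + fn + tn),
    }

# Illustrative (fabricated) 1-h plasma glucose values in mg/dL with prediabetes labels.
one_hour_glucose = [120, 160, 145, 180, 130, 200, 150, 170]
prediabetes      = [False, True, False, True, False, True, False, True]
print(screening_metrics(one_hour_glucose, prediabetes, cutoff=155.0))
```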
Moreover, the International Diabetes Federation highlighted other parameters, appearing to be linked with MetS, which can be considered as additional criteria in predicting T2DM and/or cardiovascular disease. Those parameters are insulin resistance (HOMA-IR), microalbuminuria and a "prothrombotic state" (to be measured via fibrinolytic factors such as PAI-1) 19,25,26. Moreover, Succurro et al. 27 added that NGT subjects with 1hPG > 155 mg/dL have an atherogenic profile similar to IGT subjects, supporting that 1hPG > 155 mg/dL may be considered to identify individuals at risk for cardiovascular disease 27. Screening and prevention of T2DM can be helped by assessment of IR, although cut-off values are also specific to race, age, gender, etc. PAI-1 is considered to be an important indicator of cardiovascular risk and is strongly related to MetS, while microalbuminuria is used as an important marker for the detection of renal dysfunction. Our study shows that there is a significant difference between subjects with a normal and an elevated 1hPG with respect to cardiometabolic profile. When we focus on subjects with NGT, elevated 1hPG levels were associated with elevated HOMA-IR, presence of microalbuminuria and a worse lipid profile, and a significant trend was found for PAI-1. Our results confirm those of the study of Abdul Ghani et al. 9, who observed that the prevalence of MetS in people with NGT was 14.3%. In our study, we noticed a 15% higher prevalence of MetS in subjects with NGT but an elevated 1hPG, compared to subjects with NGT and a normal 1hPG. However, published studies were population based, while we selected patients from an obesity clinic with a higher mean BMI. It is noteworthy that 1hPG is a convenient measure in an obese population to detect subjects at risk for MetS and cardiovascular disease, as it does not necessitate 2-h values 1,3,[8][9][10]12. Indeed, this study shows that when an OGTT is interpreted without taking the 2hPG value into account, only 1.6% (four patients) of diabetes diagnoses, based on OGTT criteria, would have been missed. On the other hand, all patients diagnosed with diabetes based on HbA1c values showed an elevated 1hPG, while 18 patients did not show an elevated 2hPG. Therefore, a shorter 75 g OGTT with FPG and 1hPG estimation can reduce the workload among nurses without a significant risk of misdiagnosis. This seems to be in line with Fiorentino et al. 28, who stated that a 1hPG > 155 mg/dL may be a useful tool to identify a subset of individuals within HbA1c-defined glycemic categories at higher risk of developing type 2 diabetes. A major strength of this study is that it consists of 2439 well-characterized subjects, which can be considered a large cohort. There was no significant influence from other medical conditions. A relative limitation is the small number of men analyzed in this study. The subjects included in this study were referred by their general practitioner or came at their own initiative, and it is well known that women tend to seek help for weight problems more often and earlier than men. However, this finding does not alter the importance and conclusions of the observations. A final limitation is the cross-sectional nature of this study. It would be interesting to organize a long-term follow-up to improve the prediction model. Conclusion This study illustrates the clinical importance of a 1-h glucose determination in obese subjects.
ROC analysis confirmed 155 mg/dL as a useful cut point to diagnose prediabetes or diabetes in obese patients. Moreover, even in subjects with a normal fasting glycemia, an elevated 1hPG can discriminate subjects having a worse cardiometabolic risk profile. As such, this group can be considered as a new target group for early intervention.
Fig. 1 Receiver operating characteristic curve for the discriminatory ability of 1hPG values with respect to diabetes and prediabetes, based on OGTT diabetes criteria. a The ROC curve for FPG, 1hPG, and 2hPG to discriminate individuals with diabetes. b The ROC curve for FPG, 1hPG, and 2hPG to discriminate individuals with prediabetes, after excluding T2DM. The AUC for 2hPG (prediabetes, 0.837; diabetes, 0.982) was greater compared to 1hPG ≥ 155 mg/dL (prediabetes, 0.700; diabetes, 0.924) and FPG (prediabetes, 0.629; diabetes, 0.898).
Fig. 2 Receiver operating characteristic curve for the discriminatory ability of 1hPG values with respect to diabetes and prediabetes, based on HbA1c-levels. a The ROC curve for FPG, 1hPG, and 2hPG to discriminate individuals with diabetes. b The ROC curve for FPG, 1hPG, and 2hPG to discriminate individuals with prediabetes, after excluding T2DM. For diabetes (a), the AUC for FPG (0.935) was greater compared to 1hPG ≥ 155 mg/dL (0.933) and 2hPG (0.656). For prediabetes (b), the AUC for 2hPG (0.937) was greater, followed by 1hPG ≥ 155 mg/dL (0.688) and FPG (0.669).
2019-05-25T13:51:49.048Z
2019-05-24T00:00:00.000
{ "year": 2019, "sha1": "8a6773fadd9a4f4a92180391591117044adac00f", "oa_license": "CCBY", "oa_url": "https://www.nature.com/articles/s41387-019-0084-y.pdf", "oa_status": "GOLD", "pdf_src": "PubMedCentral", "pdf_hash": "8a6773fadd9a4f4a92180391591117044adac00f", "s2fieldsofstudy": [ "Medicine" ], "extfieldsofstudy": [ "Medicine" ] }
239957639
pes2o/s2orc
v3-fos-license
3-O-Carbamoyl-5,7,20-O-trimethylsilybins: Synthesis and Preliminary Antiproliferative Evaluation To search for novel androgen receptor (AR) modulators for the potential treatment of castration-resistant prostate cancer (CRPC), naturally occurring silibinin was sought after as a lead compound because it possesses a moderate potency towards AR-positive prostate cancer cells and its chemical scaffold is dissimilar to all currently marketed AR antagonists. On the basis of the structure–activity relationships that we have explored, this study aims to incorporate carbamoyl groups into the alcoholic hydroxyl groups of silibinin to improve its capability to selectively suppress AR-positive prostate cancer cell proliferation together with its water solubility. To this end, a feasible approach was developed to regioselectively introduce a carbamoyl group to the secondary alcoholic hydroxyl group at C-3 without causing the undesired oxidation at C2–C3, providing an avenue for achieving 3-O-carbamoyl-5,7,20-O-trimethylsilybins. The application of the synthetic method can be extended to the synthesis of 3-O-carbamoyl-3′,4′,5,7-O-tetramethyltaxifolins. The antiproliferative potency of 5,7,20-O-trimethylsilybin and its nine 3-carbamoyl derivatives was assessed in an AR-positive LNCaP prostate cancer cell line and two AR-null prostate cancer cell lines (PC-3 and DU145). Our preliminary bioassay data imply that 5,7,20-O-trimethylsilybin and four 3-O-carbamoyl-5,7,20-O-trimethylsilybins emerge as very promising lead compounds due to the fact that they can selectively suppress AR-positive LNCaP cell proliferation. The IC50 values of these five 5,7,20-O-trimethylsilybins against the LNCaP cells fall into the range of 0.11–0.83 µM, corresponding to up to 660 times greater in vitro antiproliferative potency than silibinin. Our findings suggest that carbamoylated 5,7,20-O-trimethylsilybins could serve as a natural product-based scaffold for new antiandrogens for lethal castration-resistant prostate cancer. Introduction Castration-resistant prostate cancer (CRPC) is a lethal version of prostate cancer that continues to claim the lives of over 30,000 American men per year [1]. Androgen receptor (AR)-regulated gene expression still holds the foremost impetus for the progression of CRPC [2]. In addition to acquiring drug resistance over the course of treatment, a considerable portion of patients with CRPC are primarily resistant to the current FDA-approved treatments targeting the AR-signaling axis, as evidenced by their hazard ratios for the primary end points in the phase III trials [3]. Consequently, novel potential therapeutic strategies for CRPC, including poly(adenosine diphosphate (ADP)-ribose)polymerase inhibitors [4,5], androgen receptor degraders [6,7], and CAR-T cell therapy [8], are emerging. As part of our ongoing project to discover new AR modulators with dissimilar chemical scaffolds for the potential treatment of CRPC, this study picks up naturally occurring silibinin (1, also known as silybin, Figure 1) as a lead compound because it has a chemical scaffold completely different from all marketed AR antagonists and possesses moderate potency towards AR-positive prostate cancer cells. Silibinin (1) is the most abundant flavonolignan of silymarin, a well-known traditional European medicine and the crude extract from milk thistle (Silybum marianum) [9].
Its initial name is silybin because it was considered as a single compound in 1968 [9] and later silibinin was recommended as an alternative name to highlight the fact that it is a diastereomeric mixture of silybin A and silybin B [10]. Throughout this paper, silibinin is used to stand for the commercially available mixture of silybin A and silybin B. In addition to having appreciable potential in treating prostate cancer on the ground of its in vitro and in vivo experimental data [11][12][13], silibinin (1) acts as an anti-prostate cancer agent with mechanisms of action that are associated with the AR-signaling axis as summarized in our review article [14]. Specifically, silibinin can inhibit the secretion of prostate-specific antigens (PSA) [11,[15][16][17], lower the AR level, prevent AR nuclear localization, and promote AR degradation [11,16,18]. Its non-toxic profiles, which have been corroborated by its long-term dietary application along with its clinical trial data, make it even more alluring as a lead compound [19]. However, its moderate potency, poor selectivity, and poor bioavailability pose significant impediments to clinical applications. Chemical manipulations have been verified by us and others to be a viable strategy to boost its potency [20][21][22][23][24] and to improve its pharmacokinetic profiles [25,26]. The earlier studies on the structure-activity relationships of silibinin (1) and 2,3-dehydrosilybin (2) from our laboratories revealed that chemical modification on silibinin (1) led to significantly higher selectivity in the proliferation inhibition of AR-positive LNCaP cells vs. AR-null PC-3 and DU145 cells when compared with the same chemical modification on 2,3-dehydrosilybin (2) [20][21][22]. Our recent investigation suggests that modification of the alcoholic hydroxyl group at C-23 of 3,5,7,20-O-tetramethyl-2,3-dehydrosilybin resulted in appreciably higher selectivity in suppressing AR-positive LNCaP prostate cancer cell proliferation compared with modification of the phenolic hydroxyl groups [27]. These results prompted us to aim for the appropriately designed substituents at alcoholic hydroxyl groups at C-3 and C-23 for 5,7,20-O-trimethylsilybin (3) in hopes to enhance the antiproliferative potency and selectivity towards AR-positive prostate cancer cells. Structure manipulations on the alcoholic hydroxyl groups at C-3 and C-23 have been reported to yield silybin 3,23-bis-O-hemisuccinate and 23-phosphodiester silybin with improved pharmacokinetic profiles compared with silibinin [25,26]. However, these derivatives do not show significant improvement in antiproliferative potency against prostate cancer cells (both LNCaP and PC-3 cell lines). The carbamate-incorporated compounds have been proven to possess sufficient water solubility and improved biological activity [28,29]. The carbamate derivatives of 5,7,20-O-trimethylsilybin (3) have thus been designed to fine-tune the alcoholic hydroxyl groups at C-23 and C-3 in hopes to simultaneously increase the potency, selectivity, and aqueous solubility. In this paper, a regioselective synthesis of 3-O-carbamoyl-5,7,20-O-trimethylsilybins, along with the antiproliferative potency towards AR-containing and AR-null prostate cancer cell lines, are presented. 
Synthesis So far, no 3-O-carbamoyl derivative of flavanonol-based flavonolignans has yet been reported. The synthesis of 3-O-carbamoyl derivatives of flavanonol-based flavonolignans could be a challenge due to the fact that an oxidation at the C-2, C-3 position can readily occur under the basic conditions [30] and the typical bases employed for the carbamoylation of the alcoholic hydroxyl groups are NaH or KH [31].
Encouraged by the successful preparation of 7-O-benzylsilybin and 5,7,20-O-trimethylsilybin mediated by potassium carbonate under strictly anaerobic conditions [21], we initially attempted to synthesize 3,23-O-dicarbamoyl-5,7,20-O-trimethylsilybin (4) by treating 5,7,20-O-trimethylsilybin (3) [21] with N,N-dimethylcarbamoyl chloride using NaH as a base under strictly anaerobic conditions (Scheme 1). Unfortunately, the TLC plates from our several trials showed this reaction was very messy and did not yield the desired carbamate 4, suggesting the starting material 3 (5,7,20-O-trimethylsilybin) was decomposed under the strong basic conditions [32]. At this point, we revisited the literature and searched for weaker organic bases for this carbamoylation. It has been reported that carbamates could be achieved in good yield by refluxing alcohols with N,N-dimethylcarbamoyl chloride in pyridine [33]. These conditions are not appropriate for the synthesis of 3-O-carbamoylsilybin because heating silibinin at 80−90 °C in dry pyridine in the presence of air leads to the oxidation at C2-C3 [34]. A secondary benzylic alcohol was reported to be converted to the corresponding carbamate by treating it with a bulky carbamoyl chloride mediated by Et3N in DCM [35,36]. By integrating these two carbamoylation methods, we decided to prepare 3,23-O-dicarbamoyl-5,7,20-O-trimethylsilybin by treating 5,7,20-O-trimethylsilybin with N,N-dimethylcarbamoyl chloride (4 eq) in DCM (0.1 M) using triethylamine (4 eq) and 4-(N,N-dimethylamino)pyridine (DMAP, 1 eq) at room temperature for 16 h (Scheme 2). Surprisingly, the carbamoylation regioselectively occurs at the secondary alcohol at C-3 in the presence of the primary alcohol at C-23 when treating 5,7,20-O-trimethylsilybin (3) with N,N-dimethylcarbamoyl chloride using triethylamine and DMAP as bases. This unexpected regioselective reaction has been repeated over forty times by two co-authors, and the structure of the 3-O-carbamoyl-5,7,20-O-trimethylsilybin (5) has been confirmed by 2D-NMR data and HRMS (see Structure Determination). Removal of DMAP resulted in no carbamoylation reaction, and the minimum amount of DMAP required for the completion of N,N-dimethylcarbamoylation of 5,7,20-O-trimethylsilybin (3) was 0.5 equivalents, suggesting an appropriate amount of DMAP is crucial for facilitating the carbamoylation.
Structure Determination of 3-O-Carbamoyl-5,7,20-O-trimethylsilybin 5 The structure of 5,7,20-O-trimethyl-3-O-(N,N-dimethylcarbamoyl)-silybin (5) was elucidated by interpreting its 1D- and 2D-NMR data (Table 2), as well as high resolution MS and IR data. The structure of 5 was characterized by the existence of one signal at 2.85 ppm representing 6 protons in its 1 H NMR spectrum (Supplementary Materials) and at 36.75 (36.08) and 155.28 in its 13 C NMR spectrum for an additional dimethylcarbamoyl group when compared with the starting material 5,7,20-O-trimethylsilybin (3), which was corroborated by the HRMS data. The dimethylcarbamoyl group in 5 was assigned to 3-OH based on the fact that the H-3 signal is downshifted from 4.42 ppm to 5.51 (5.49) ppm relative to 5,7,20-O-trimethylsilybin (3) [21]. The carbamoylation at 3-OH was further confirmed by the critical HMBC correlations from the signal at δ H 5.51 (5.49) (H-3) to the signal at δ C 155.28 (carbonyl carbon from the dimethylcarbamoyl group, Figure 2). It is worth noting that the starting material silibinin (1) (purchased from Fisher Scientific) was a nearly equimolar diastereoisomeric mixture of silybin A and silybin B. The carbamoyled derivatives of 5,7,20-O-trimethylsilybin are thus diastereoisomeric mixtures as well, and some NMR signals of these derivatives therefore appear as a pair (see Table 2 and Materials and Methods).
Scope of the Reaction Approach The selectivity may be caused by the hydrogen bonding between the hydrogen of the alpha-OH and the 4-carbonyl oxygen atom. To further confirm that the secondary 3-OH The applicability of the method was also evaluated in the open-chain α-hydroxyl ketones. Acyclic 3-hydroxy-2-butanone (18) was selected and treated with the respective (thio)carbamoyl chloride (4 eq) in the presence of triethylamine (4 eq) and DMAP (1 eq) in DCM (0.1 M). The 1 H NMR analysis (Figure 3) of the characteristic H-3 in the crude product indicated that 54% and 17% of 3-hydroxy-2-butanone (18) can be converted to the corresponding N,N-dimethylcarbamoyl product (19) and N,N-diethylcarbamoyl product (20), respectively (Scheme 5). The overall conversion rates for the open-chain α-hydroxyl ketones are lower than those for the cyclic α-hydroxyl ketones, and no carbamoylation reaction of 3-hydroxy-2-butanone with N,N-dimethylthiocarbamoyl halide under the abovementioned conditions was observed. Preliminary Anti-Proliferative Activity towards AR-Positive and AR-Null Prostate Cancer Cell Lines The preliminary in vitro antiproliferative potency of 5,7,20-O-trimethylsilybin (3) and its nine carbamoyled derivatives towards the AR-positive LNCaP prostate cancer cell line was assessed using the WST-1 cell proliferation assay according to the procedure described in the Materials and Methods section. The AR-null DU145 and PC-3 prostate cancer cell models were used to evaluate the inhibitory selectivity towards AR-positive cells over AR-null ones. Enzalutamide, a current FDA-approved second-generation AR antagonist for CRPC, and silibinin were used as positive controls for comparison.
The IC 50 values were calculated based on the dose-response curves and summarized in Table 3. On the grounds of our preliminary bioassay data, 5,7,20-O-trimethylsilybin (3) and its four 3-substituted derivatives (4, 5, 6, and 8) emerge as very promising lead compounds due to the fact that they can selectively suppress AR-positive LNCaP cell proliferation with IC 50 values of 0.11−0.83 µM and have more efficacy than the current FDA-approved second-generation AR antagonist enzalutamide (Table 3). Our findings suggest that 3-O-substituted-5,7,20-O-trimethylsilybin may serve as a natural product-based scaffold for new antiandrogens for lethal castration-resistant prostate cancer. General Procedures IR spectra were recorded on a Nicolet Nexus 470 FTIR spectrophotometer (Waltham, MA, USA). HRMS were obtained on an Orbitrap mass spectrometer with electrospray ionization (ESI). NMR spectra were obtained on a Bruker Fourier 300 spectrometer (Billerica, MA, USA) in CDCl 3 . The chemical shifts are given in ppm referenced to the respective solvent peak, and coupling constants are reported in Hz. The value of the central peak of the solvent was calibrated as δ = 7.26 ppm for the 1 H NMR spectrum and as δ = 77.16 ppm for the 13 C NMR spectrum, respectively. THF and dichloromethane were purified by the PureSolv MD 7 Solvent Purification System from Innovative Technologies (MB-SPS-800) (Herndon, VA, USA). All other reagents and solvents were purchased from commercial sources and were used without further purification. Silica gel column chromatography was performed using silica gel (32-63 µm). Preparative thin-layer chromatography (PTLC) separations were carried out on thin-layer chromatography plates loaded with silica gel 60 GF254 (EMD Millipore Corporation) (Burlington, MA, USA). 5,7,20-O-Trimethylsilybin (3) was synthesized from silibinin (>98%, purchased from Fisher Scientific (Portland, OR, USA)) using the procedure previously described by us [21]. The HPLC purity analyses were performed on an Agilent Hewlett-Packard 1100 series HPLC DAD system (Santa Clara, CA, USA) using a 5 µm C18 reversed phase column (4.6 × 250 mm) and a diode array detector. Solvent A was methanol and solvent B was 5% methanol in DI water. All testing samples were run for 30 min of 35−100% A in B, with a 20 min gradient. The flow rate was 1 mL/min. (1 mL, 0.1 M). The subsequent mixture was stirred for 10 min at room temperature, to which the respective (thio)carbamoyl chloride (0.4 mmol) was added. The reaction was allowed to proceed at room temperature with stirring overnight under argon prior to being quenched with brine (50 mL). The resulting mixture was extracted with ethyl acetate (30 mL × 3), the combined organic extracts were dried over anhydrous sodium sulfate, and the organic solvents were removed. The crude product was subjected to PTLC purification eluting with DCM:MeOH (95:5, v/v) to afford the respective 3-carbomoyled derivative and 3,23-dicarbomoyled derivative. Their physical and spectral data are summarized below. (1 mL, 0.1 M). The subsequent mixture was stirred for 10 min at room temperature, to which the respective (thio)carbamoyl chloride (0.4 mmol) was added. The reaction was allowed to proceed at room temperature overnight under argon prior to being quenched with brine (50 mL). The resulting mixture was extracted with ethyl acetate (30 mL × 3), the combined organic extracts were dried over anhydrous sodium sulfate, and the organic solvents were removed.
The crude product was subjected to PTLC purification eluting with DCM/MeOH (95:5, v/v) to afford the respective 3-carbomoyled derivative. Their physical and spectral data are summarized below. 13 13 Synthesis of Tetramethyltaxifolin (14) A reaction flask was charged with taxifolin (20 mg, 0.066 mmol) and potassium carbonate (82 mg, 0.59 mmol), which was vacuumed three times under argon before the addition of anhydrous acetone (0.47 mL, 0.14 M). Dimethylsulfate (75 µL, 0.79 mmol) was then added into the reaction flask through a long needle. The reaction solution was refluxed overnight and cooled down to room temperature before the aqueous solution of ammonium chloride (20 mL) was added to quench the reaction. The subsequent mixture was extracted with ethyl acetate (10 mL × 3), the combined organic extracts were dried over anhydrous sodium sulfate, and the organic solvents were removed under vacuum. The crude product was purified over preparative TLC plate eluting with ethyl acetate/hexane (5:1, v/v) and ethyl acetate/hexane (7:3, v/v), sequentially, to furnish the desired 14 as a slight yellow solid in 86% yield; 149-151 • C; in [37], 165-166 • C; in [38] [39]. Only one exception is that the chemical shift for H-5 was reported as 7.08 ppm in the literature [39] rather than at 6.93 ppm, without showing the original 1 H NMR spectrum. Our chemical shift at H-5 is closer to the value reported in another work [40]. General Procedure for the Synthesis of 3-Carbamoyled Derivatives of Tetramethyltaxifolin Triethylamine (56 µL, 0.4 mmoL) and 4-dimethylaminopyridine (12 mg, 0.1 mmol) were sequentially added to a solution of tetramethyltoxifolin (14) (36 mg, 0.1 mmol) in DCM (1 mL, 0.1 M). The subsequent mixture was stirred for 10 min at room temperature, to which the respective (thio)carbamoyl chloride (0.4 mmol) was added. The reaction was allowed to proceed at room temperature with stirring overnight under argon prior to being quenched with brine (50 mL). The resulting mixture was extracted with ethyl acetate (30 mL × 3), the combined organic extracts were dried over anhydrous sodium sulfate, and the organic solvents were removed. The crude product was subjected to PTLC purification eluting with hexane/EtOAc (3:7, v/v) to afford the respective 3-carbomoyled derivative. Their physical and spectral data are summarized below. WST-1 Cell Proliferation Assay The LNCaP, DU-145, or PC-3 prostate cancer cells were plated in 96-well plates at a density of 3200 in each well in 200 µL of culture medium. The cells were then treated with positive reference, or synthesized mimics separately at different doses for 3 days, while equal treatment volumes of DMSO were used as the vehicle control. The cells were cultured in a CO 2 incubator at 37 • C for three days. An amount of 10 µL of the premixed WST-1 cell proliferation reagent (Clontech) was added to each well. After mixing gently for 1 min on an orbital shaker, the cells were incubated for an additional 3 h at 37 • C. To ensure homogeneous distribution of color, it was important to mix gently on an orbital shaker for 1 min. The absorbance of each well was measured using a microplate-reader (Synergy HT, BioTek) at a wavelength of 430 nm. The IC 50 value was the concentration of each compound that inhibits cell proliferation by 50% under the experimental conditions and was the average from triplicate determinations that were reproducible and statistically significant. 
For calculating the IC 50 values, a linear proliferative inhibition was made based on at least five dosages for each compound. Statistical Analysis All data are represented as the mean ± standard deviation (S.D.) for the number of experiments indicated. Other differences between treated and control groups were analyzed using the Student's t-test. A p-value < 0.05 was considered statistically significant. Conclusions Collectively, the regioselective carbamoylation at the secondary alcoholic hydroxyl group at C-3 of flavanonols and flavanonol-type flavonolignans was developed for the first time. This approach avoids the oxidation of the C2-C3, offering a practical approach to the synthesis of 3-O-carbamoyl derivatives of flavanonols and flavanonol-type flavonolignans. Our preliminary WST cell proliferation assay data exhibit that 5,7,20-O-trimethylsilybin and four 3-O-carbamoyl-5,7,20-O-trimethylsilybins bear high selectivity and promising potency towards the AR-positive LNCaP prostate cancer cell line. These findings advocate that 3-O-substituted-5,7,20-O-trimethylsilybin may serve as a natural product-based scaffold for new antiandrogens for lethal castration-resistant prostate cancer. Further evaluation of their (including their optically pure versions) promising potency and selectivity in other AR-positive prostate cancer cell lines, as well as their AR modulating capability, are in progress and will be reported in due course.
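The Materials and Methods describe reading IC50 values off dose–response curves built from at least five dosages per compound. The sketch below illustrates one way such a value can be estimated by linear interpolation between the two doses that bracket 50% inhibition; the dose and inhibition numbers are invented for illustration, are not taken from Table 3, and this is not the authors' actual calculation code.

```python
import numpy as np

def ic50_by_interpolation(doses_um, percent_inhibition):
    """Estimate the dose giving 50% inhibition by linear interpolation
    between the two measured doses that bracket 50%."""
    doses = np.asarray(doses_um, dtype=float)
    inhib = np.asarray(percent_inhibition, dtype=float)
    order = np.argsort(doses)            # work in increasing-dose order
    doses, inhib = doses[order], inhib[order]

    for lo in range(len(doses) - 1):
        hi = lo + 1
        if inhib[lo] <= 50.0 <= inhib[hi]:
            # straight line between the two bracketing points
            frac = (50.0 - inhib[lo]) / (inhib[hi] - inhib[lo])
            return doses[lo] + frac * (doses[hi] - doses[lo])
    raise ValueError("50% inhibition is not bracketed by the measured doses")

# Hypothetical five-dose series (µM) with measured % inhibition.
doses = [0.01, 0.1, 0.5, 1.0, 5.0]
inhibition = [5.0, 20.0, 45.0, 65.0, 90.0]
print(f"estimated IC50 ≈ {ic50_by_interpolation(doses, inhibition):.2f} µM")
```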
2021-10-27T15:18:35.852Z
2021-10-24T00:00:00.000
{ "year": 2021, "sha1": "d216b9e1c68a9fa77f9449286afbe97c87da3acb", "oa_license": "CCBY", "oa_url": "https://www.mdpi.com/1420-3049/26/21/6421/pdf", "oa_status": "GOLD", "pdf_src": "PubMedCentral", "pdf_hash": "c96ddffccd8e965ee6a51027d6a21ace085e4a98", "s2fieldsofstudy": [ "Chemistry", "Medicine" ], "extfieldsofstudy": [ "Medicine" ] }
248936879
pes2o/s2orc
v3-fos-license
Friction spot brazing of stainless steel to titanium (grade 1) using aluminum foil This paper demonstrates a simple and effective technique for friction spot brazing of stainless steel (st37) to titanium (grade 1). We use aluminum foil as a filler that is placed between the base metals. We evaluate the joints when using different rotational speeds: 1800, 2000, 2200, 2400, 2600, and 2800 RPM. We characterize the joint using scanning electron microscopy, optical microscopy, energy dispersive x-ray analysis, the microhardness of cross-sections, and fractography. We found that the strongest tensile strength joint (6 kN) come from friction spot brazing at 2000 RPM. The joint interface of the 2000 RPM sample contains intermetallic compounds such as FeTi, Fe3Al, FeAl2, and Ti3Al, which increases the tensile strength. Introduction Welding titanium requires the use of inert gas to avoid oxidation. Common welding techniques used for joining Titanium to other base metals, such as TiG and MiG, use Argon gas to avoid this problem. These welding techniques are expensive and require complex tools [1,2]. Mechanical joining techniques such as friction brazing offers an alternative that is inexpensive, uses simpler tools and does not require the use of inert gases [3]. Traditional titanium welding techniques are expensive and require the use of complex tools to avoid the problem of oxidation. Common welding techniques used for joining Titanium to other base metals, such as TiG and MiG [4], use Argon gas to avoid oxidizing the Titanium metal [5]. These welding techniques are expensive and use complex tools that require the use of inert gas [6]. Mechanical joining techniques offer a simpler and cost-effective solution for joining Titanium to other alloys [7]. Several popular methods for mechanically joining materials include friction welding [8], ultrasonic welding [9], and friction-based joints [10]. Among these solid-state joining methods, friction spot brazing is a desirable process because it joins the metals below their melting temperatures [11]. In this process, the friction between the rotational tool and the base metal produces sufficient thermal energy to weld the materials together [12]. The joint is formed through intermolecular diffusion occurring between the metals [13,14]. Friction welding and friction brazing are used to join similar materials but are now being applied to join dissimilar and incompatible materials such as stainless steel to titanium, aluminum to titanium, and ceramic to materials [15]. Various researchers reported evaluating the friction joints' mechanical properties and microstructure characteristics [16]. Friction process joining, especially friction brazing, can join stainless steel to Titanium [17,18], but the strength of these joints is insufficient for many industrial applications. Joints produced using friction suffer from the formation of brittle FeTi and CrTi intermetallic compounds (IMCs) that negatively affects joint strength [19][20][21]. Some research has been done to investigate the effects of friction joining parameters on the joining of stainless steel to titanium joints, even with IMCs phases, to increase the strength of the joints [21]. Interlayer materials such as powder or foil placed between base metals can avoid the formation of IMCs in the joint, resulting in stronger joints [22,23]. This paper is the first to study the use of friction spot brazing (FSB) to join stainless steel to Titanium (grade 1) using Aluminum foil as an interlayer. 
We evaluate the joint characteristics when joining these metals at different rotational speeds. We measured the formation of any reaction products during the joining process and evaluated the mechanical properties of the joints. We present optimized process parameters for this type of joint. Experimental setup This study characterizes joining a 0.5 mm stainless steel (St37) plate to a 0.5 mm titanium (grade 1) plate using friction spot brazing. Table 1 lists the chemical composition of the Titanium (grade 1) and stainless steel (St37) used in this experiment, and of the Aluminum 6061 foil interlayer material placed in the joint. After cutting the metal into 10 × 4 cm plates, we used an ultrasonic bath containing ethanol and acetone to remove any surface contaminants. We placed a 4 × 4 cm, 0.2 mm thick aluminum foil between the base metals. Figure 1 shows the physical layout of the metal plates and the FSB joint. We created the FSB joint using a cylindrical pin-less tool made out of tungsten carbide (WC) with a 20 mm diameter shoulder, as shown in figure 2. With a dwell time of 30 seconds and an upsetting of 0.9 mm, we varied the rotational speed of the tool to 1800, 2000, 2200, 2400, 2600, and 2800 RPM. We examined the joints of each sample using several methods. Cross-sections of the FSB samples were examined with an optical microscope (OM) and a scanning electron microscope (SEM), a microhardness test was used to determine whether the rotational speed would influence the interfacial features, and a tensile lap shear test was used to measure the strength of the FSB joints. The speed of the tensile strength test was 0.2 mm min −1 , and the fracture load and displacement were measured during the tensile lap shear test. Results and discussion 3.1. Appearance Figure 3 shows the joint appearance of the FSB samples at the different rotational speeds. Steel plates were joined to titanium (grade 1) plates using an aluminum foil filler. These samples are easily joined in air without oxidation and use low energy, compared with other joining processes such as roll bonding, MIG/TIG, fusion welding, laser welding, and vacuum brazing. The steel plate shows a completely uniform heat-affected zone (HAZ), related to the constant heat transfer from the pin-less tool to the welding nugget. Unlike friction stir spot welding (FSSW), the joint surface is almost smooth, and no keyhole is observed because a pin-less tool is used and there is no material distortion. Figure 4 shows the cross-section of the samples and their interfacial features for the FSB joints while varying the rotational speed of the tool. Each sample was cut in the transverse direction at the center of the FSB joint. Each figure is created by stitching multiple photos taken with an optical microscope at 50× magnification that span the entire 40 mm width. The vertical strip pattern is a result of the microscope's vignetting. Optical inspection of interface The 1600 RPM sample is not smooth and does not have a uniform interface surface. The tensile strength of the joint is less than 2 kN, because the low friction does not generate enough heat to fuse the metals. The figure shows a gap between the interface metals. Increasing the rotational speed of the tool from 1600 to 2000 RPM increases the joint strength, melting and diffusing the aluminum foil into the base metals.
This produces a more uniform interface between base metals. At 2000 RPM the tensile strength of the joint is 6 kN. Increasing the tools rotational speed above 2000 RPM reduces the joint strength and increases the thickness of the interlayer, which is brittle and suffers from more cracks. The cracks are a result of the different heat transfer coefficients between the metals and the different solidification coefficient of molten aluminum. Increasing the rotational speed of tool from 2600 RPM to 2800 RPM and the higher pressure of rotational tool results in plastic deformation of the stainless steel. The higher pressure of the FSB tool shoulder on the sample breaks the surface iron oxide film and creates a direct contact between the iron and molten aluminum, which increases the diffusion rate of aluminum to base metals. Figure 5 shows cross sections of the 1600 RPM and 1800 RPM FSB samples taken with a SEM. The low heat transfer results in a weak joint. Increasing the rotational speed to 1800 RPM, results in generating more heat that partially melts the aluminum foil. In both cases no intermetallic phases are detected. Interface microstructure Increasing the rotational speed to 2000 RPM generates sufficient heat to melt the metals at the joint zone. The main three metals have a melting points of 1536°C for Fe, 2370°C for Ti, and 660°C for Al. Melting these three metals results in intermetallic phases that considerably increase the joint's strength. Figure 6 shows diffusion of aluminum to stainless steel is higher than the diffusion of aluminum to titanium, which results in a higher concentration of intermetallic phases on stainless steel side than on the titanium side. FeAl 3 and Fe 3 Al form on the stainless-steel side and TiAl 3 forms on the titanium side. Figure 7 shows that the higher heat and pressure generated when using a rotational speed of 2200 RPM causes transverse cracks in the interface. These cracks reduce the joint's strength. Figures 7(b)-(c) shows that a eutectoid structure and FeAl phase at the interface has been formed. Eutectoid structure reduces the concentration of intermetallic phases. Figure 8(a) shows that increasing the rotational speed increases the diffusion of aluminum to stainless steel, but reduces the tensile strength because of cracks in the interlayer. The higher heat forms additional intermetallic phases, i.e., TiAl 2 , FeAl 3 , and Ti 3 Al, near the base metals [22,23]. The differences in the volumetric shrinkage coefficient of the base metals leads to transverse cracks in the interface, which weaken the joint. Figure 9 shows the wavy shape of the stainless steel and aluminum interface formed by increasing the heat input to the joint. The diffusion rate of aluminum to stainless steel is more than the titanium side. Figure 9(b) shows the Ti 3 Al intermetallic phase formed in the interface of aluminum foil and a titanium plate. FeAl 2 and FeAl intermetallic phases are formed in the interface between the steel and aluminum. Figure 9(c) shows the interface layer is full of cracks that reduces the tensile strength of joint. Figure 10(a) shows higher rotational speed and higher heat input form a eutectoid structure in the interface of the sample. Figure 10(b) shows the FeTi and TiAl intermetallic phase is formed because of higher heat and increased diffusion. Figure 10(d) shows in the titanium side, the TiAl 3 phase is formed. 
Figure 10(g) shows the 2800 RPM sample, which suffers from transverse cracks because of the higher tool pressure and the differences in the volumetric shrinkage coefficients. This reduces the tensile strength of the 2800 RPM FSB joints. Friction spot brazing at 2000 RPM results in the highest tensile strength (6 kN), and the joint strength is reduced as the tool's rotational speed is increased. Microhardness test Figure 11 shows the results of the microhardness test for the stainless steel and titanium interface layer. Each data point is the average of 7 hardness tests done on a sample. Increasing the rotational speed from 1600 RPM to 2000 RPM activates a work-hardening mechanism because the pressure of the tool increases the microhardness of the samples. Increasing the rotational speed to 2200 RPM reduces the microhardness because the higher heat input overcomes the work-hardening mechanism. The cross-section surface of the 2400 RPM FSB sample has the highest microhardness because the high pressure on the joint zone results in high plastic deformation. The intermetallic FeAl 3 [24] and TiAl 2 [25] phases overcome the work-softening mechanism [26]. At higher speeds, between 2600 RPM and 2800 RPM, the higher heat results in work softening that lowers the microhardness further. As the rotational speed increases, the friction rate between the tool and the stainless steel surface increases. As a result, the heat input increases. Then, due to the activation of the work-softening mechanism, plastic deformation occurs. The plastic deformation, associated with the locking of dislocations, thus increases the hardness and strength of the joint. Tensile lap shear strength Figure 12 shows the sketch of the tensile lap shear test of the samples, and figure 13 shows the results of an EDS line measured across the diameter of the 20 mm FSB joint for each sample. The diffusion rate of the base metals and the aluminum foil filler increases as the rotational speed increases. We measured the tensile strength using the ASTM E8 standard with a speed of 0.2 mm min −1 . We chose the lowest speed because the intermetallic layer formed between the base metals was brittle. Increasing the rotational speed from 2000 RPM to 2800 RPM causes the diffusion rate of iron into the interlayer to increase. The iron diffuses more than the titanium because of its lower melting point. The diffusion rate of aluminum into the base metals increases with increasing rotational speed because of the higher heat input. The diffusion rate of titanium into the interlayer increases substantially at 2800 RPM, compared to any of the lower-speed samples, because the higher heat input substantially increased the diffusion coefficient. Equal amounts of stainless steel and aluminum were diffused into the titanium base metal to form a solid solution. Figures 14 and 15 show the load-displacement curves obtained from a tensile strength test of the base metals and of the joints across the different rotational speeds. Figure 14 shows the tensile strength of the individual base metals. Stainless steel has a tensile strength of ∼9.1 kN while titanium has a maximum tensile strength of ∼5.9 kN.
Figure 15 shows that increasing the heat input and rotational speed reduces the tensile strength of joints. Higher rotational speed increases the intermetallic layer thickness and produces more cracks. The higher pressure of the tool on the friction zone pushes out excess aluminum and results in a higher tensile strength for the 2400 RPM and 2800 RPM samples over the 2200 RPM and 2600 RPM samples. Also, the 2200 RPM and 2600 RPM samples have more cracks in the interlayer. Errors in the process may have applied higher pressure to joint zone, which may explain the reduced tensile strength of 2600 RPM sample. We excluded any higher rotational speeds that deform and/or damage the stainless steel. Figure 16 depicts the stress-strain curve of FSB samples with varying rotational speeds. According to figure 16, the 2200 RPM sample has the highest degree of toughness. The highest toughness is because of the higher diffusion of the aluminum layer to base metals. Also, the thickness of the intermetallic increases the strength of the 2200 RPM FSB joint. In addition, increasing the rotational speed of the tool and increasing the friction between the tool and stainless steel will result in more aluminum melting between the two metals. By increasing the rotational speed and friction between stainless steel and the tool, the diffusion of aluminum foil to base metals is accelerated. This increases the strength and rigidity of the FSB joints. Figure 17 shows the thickness of the intermetallic layer between base metals increases with the rotational speed. The diffusion of aluminum to stainless steel is higher than diffusion of aluminum to titanium, because of the different melting points of base metals. The higher rotational speed and higher heat input increase the HAZ in the joint. Increasing the rotational speed increase the friction between base metals and the tool, which in turn increases the HAZ. Figure 17 shows that increasing the rotational speed and friction results in melting the aluminum foil and improving the distribution of aluminum between the base metals. The ductile fracture occurs because of the low thickness of the intermallic layer. We can avoid the ductile fracture by increasing the rotational speed to 2800 RPM, which increases the intermetallic layer thickness. Figure 17 shows that increasing the rotational speed generates higher heat input that melts the aluminum foil. As a result, aluminum is softer than base metals increasing the intermetallic thickness. Figure 17 shows the FSB fracture patterns indicating that in each case a ductile fracture occurred. Steel and titanium fracture surfaces have cup and cone structures between 2200-2600 RPM. The increase in heat input results in a deepening the cup and cone as the RPM increases. Conclusion This paper successfully achieved the dissimilar joining of St37 to Ti (grade 1) via the FSB process. Friction spot brazing is one of the easiest and least expensive method of joining different metals. FSB doesn't produce fumes, and it is environmentally friendly. The main results are as follows: 1. Without any keyhole, brazed joints are fabricated. The highest joint strength of 2000 RPM sample is 6 kN and it is almost equal to Ti base metal. 2. The increasing of rotational speed, the TiAl 2 -FeAl 3 -Fe 3 Al intermetallic phases are formed, and cracking is observed in these intermetallic compounds. 3. EDS analysis results show the highest diffusion of Al as a filler is achieved in the 2800 RPM FSB sample. 
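The toughness comparison above is read from the stress–strain curves in figure 16. As a point of reference, toughness is commonly approximated as the area under the stress–strain curve up to fracture; the sketch below applies the trapezoidal rule to made-up curves and is only an illustration of that idea, not a reproduction of the measured FSB data or of the authors' analysis.

```python
import numpy as np

def toughness(strain, stress_mpa):
    """Approximate toughness (energy absorbed per unit volume, MJ/m^3)
    as the area under the stress-strain curve via the trapezoidal rule."""
    strain = np.asarray(strain, dtype=float)
    stress = np.asarray(stress_mpa, dtype=float)
    return np.trapz(stress, strain)   # MPa * (dimensionless strain) = MJ/m^3

# Made-up stress-strain points up to fracture for two hypothetical joints.
strain_a = [0.0, 0.02, 0.05, 0.10, 0.15]
stress_a = [0.0, 150.0, 260.0, 310.0, 320.0]   # MPa
strain_b = [0.0, 0.02, 0.05, 0.08]
stress_b = [0.0, 140.0, 240.0, 270.0]          # MPa

print(f"joint A toughness ≈ {toughness(strain_a, stress_a):.1f} MJ/m^3")
print(f"joint B toughness ≈ {toughness(strain_b, stress_b):.1f} MJ/m^3")
```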
Overall, higher rotational speeds cause higher heat input and higher diffusion of the filler into the base metals. 4. The highest hardness was almost 850 HV, which relates to the 2400 RPM FSBed sample due to more intermetallic phases. 5. Increasing the rotational speed forms more intermetallic phases and microcracking.
2022-05-21T15:05:35.579Z
2022-05-19T00:00:00.000
{ "year": 2022, "sha1": "29d427b536628153977afb160d42776c25f45bce", "oa_license": "CCBY", "oa_url": "https://doi.org/10.1088/2053-1591/ac719e", "oa_status": "GOLD", "pdf_src": "IOP", "pdf_hash": "eacea28ae2687697b9759af707f6841b9e095a62", "s2fieldsofstudy": [ "Materials Science" ], "extfieldsofstudy": [ "Physics" ] }
252787815
pes2o/s2orc
v3-fos-license
CONTENT BASED IMAGE RETRIEVAL METHOD WITH DISCRETE COSINE FEATURE EXTRACTION IN NATURAL IMAGES Data or information at this time is not only presented in written form, but also in the form of images that require greater storage. Most of the images in the digital world use the JPEG format, where the Discrete Cosine Transform is the heart of the JPEG format, the use of DCT coefficients for indexing and image retrieval causes the retrieving process to be slower because more coefficients are processed compared to the DC coefficient method, which is only 1 /64 (1 DC coefficient) of the DCT coefficient. In this research, we perform Content Based Image Retrieval with DC feature extraction of 15,000 natural images, then calculate the distance between the images using the Manhattan Distance method. The final result of calculating precision and recall shows a value of 0.6624 and a time of less than 2 seconds, with a maximum value of 1.876 seconds. INTRODUCTION Currently data or information is not only presented in text form, but can also be presented in other forms, for example in the form of images. Nonmoving image data and moving image data require much larger storage compared to data in the form of text. Most of the images currently circulating in the digital world and on the storage media use the JPEG format. One of the advantages of the JPEG format is that the image size of the JPEG format is smaller than other image formats. More than 95% of images on the web are compressed in JPEG format where the Discrete Cosine Transform (DCT) is the heart of JPEG format images and DCT is one of the feature extraction and directly still a promising method for processing and retrieval of compressed images [1]. In JPEG format images, the image consists of a matrix of 8 x 8 pixel blocks, based on this block the JPEG image is indexed so as to create a flat plane. Meanwhile, in each 8 x 8 block, each block consists of 64 pixels where each pixel has a value or coefficient. So that in each block consists of 64 coefficients, where the first coefficient, which is located on the upper left in the block is called the discrete cosine (DC) coefficient and the remaining 63 coefficients are called AC coefficients [2]. Previous research used the Content-Based Image Retrieval (CBIR) method to search for the same image using an RGB color histogram comparison. Namely, a color histogram is created by taking the RGB color from the image and then converting it to an HSV value. After conversion, quantize up to 112 colors and calculate the contribution of each color. After generating the color contribution, each color is divided by the total number of pixels. The way to compare the color histograms of digital images is by subtracting each color histogram by the query image and the database image. The lowest difference value is the best result. The same image application uses the Content-Based Image Retrieval (CBIR) method, namely by reviewing all existing calculations and applying all these calculations in programming using Visual Studio 2008, [13] in our study using the python3 application. Research conducted in recent years used all DCT coefficients for indexing and image capture. This method causes the retrieving process to be slower because more coefficients are processed compared to the DC coefficient method, which is only 1/64 (1 DC coefficient) of the DCT coefficient [3]. 
An earlier study used the DCT coefficients (1 DC coefficient and 63 AC coefficients) to compare (match) 100 query images against a database of 500 images. The study reported that the JPEG-format results were much smaller without reducing the information displayed, and that the effectiveness was still quite high (around 0.65) [4]. Based on this description, the present study applies the Content Based Image Retrieval method to 15,000 natural images with a resolution of 256x256 pixels, using only the DC coefficients among the DCT coefficients, and then calculates their similarity with the Manhattan Distance.

Image

A picture or image is a matrix in which the row and column indexes represent a point in the image and the matrix elements (referred to as image elements or pixels) represent the gray level at that point [5]. Image compression is an image processing process that involves many methods; both its input data and its output information are images. Image compression is divided into two types, lossy compression and lossless compression. Lossy compression is usually used for images that do not require high accuracy, such as landscape photos, while lossless compression is usually used for images that require high accuracy, such as images used for medical purposes.

Content Based Image Retrieval (CBIR)

Content based image retrieval (CBIR) is a process for obtaining a number of images based on a single input image. The term was first proposed by Kato in 1992 [6]. Image retrieval, or image querying, is an image processing application that helps users quickly retrieve or search for an image in an image database based on user queries or requests [7]. The initial stage of a content-based image retrieval system is to perform extraction and description on every image in the database so as to produce a feature vector. The same extraction and description process is then carried out on the query image entered by the user. Next, a similarity comparison is made between the query image and each image in the database, and the similarity distances are sorted and displayed as output [8].

CBIR aims to retrieve images in databases that have visually similar content or features. In one study, an HID feature extraction algorithm was tested and validated; combined with DCT features it increased the precision of the CBIR system, Manhattan and Euclidean distances were considered as metrics for feature matching, and the system performed best when the Manhattan distance was used [12]. In another study, extensive experiments showed that the proposed technique achieves competitive performance compared with existing DCT-based methods, has a significant advantage over pixel-domain methods by requiring only partial decompression, and the proposed content descriptor is suitable for real-time implementation and application [11].

Calculation of the distance between two images

Distance is a commonly used approach in image search. Its function is to determine the similarity or dissimilarity of two feature vectors. The level of similarity is expressed by a score or ranking: the smaller the value, the closer the similarity between the two vectors [9]. One of the methods for measuring the distance between two images is the Manhattan Distance.
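The Manhattan Distance named above is formalized in the next section; as a hedged illustration, a minimal Python sketch of the sum-of-absolute-differences computation between two feature vectors is shown here. The function name and the use of NumPy are assumptions, not the paper's code.

```python
import numpy as np

def manhattan_distance(x, y):
    """Sum of absolute differences between two feature vectors of equal length."""
    x, y = np.asarray(x, dtype=float), np.asarray(y, dtype=float)
    return float(np.sum(np.abs(x - y)))

# Example: the smaller the value, the more similar the two vectors are.
print(manhattan_distance([1.0, 2.0, 3.0], [2.0, 0.0, 3.0]))  # -> 3.0
```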
Manhattan Distance

Manhattan distance is a formula for calculating the shortest distance between two points along the coordinate axes. The Manhattan distance between two points (x1, y1) and (x2, y2) is obtained by calculating |x2 − x1| + |y2 − y1| [10]. For feature vectors, the Manhattan Distance can be formulated as follows:

d(i, j) = |xi1 − xj1| + |xi2 − xj2| + … + |xin − xjn| = Σk |xik − xjk|, for k = 1, …, n

where:
n : data dimension
|…| : absolute value
i : testing data
j : training data

Precision and Recall

Recall is the ratio of the number of relevant documents retrieved for a given query to the total number of documents in the collection that are relevant to that query. Precision is the ratio of the number of retrieved documents that are relevant to the query to the total number of documents retrieved; it can be interpreted as the exactness or match between the request for information and the answer to that request [11]. According to Kurniawan (2010), if someone searches for information in a system and the system returns some documents, this exactness is in essence a matter of relevance: how precise or suitable the returned documents are for the needs of the information seeker depends on how relevant the documents are to that seeker [1].

III. RESEARCH METHODS

The precision of image retrieval depends on (1) the feature extraction process and (2) the feature similarity method. Some CBIR algorithms use shape features extracted from the shape of an object, and objects are then classified with higher accuracy than with conventional features such as texture and color. The color content of an image plays a very important role in content-based image retrieval; a global histogram is popularly used to represent color content, with the three RGB color channels represented as a global color histogram. These three individual color histograms provide similarity between different images, as they are scale- and rotation-invariant features. In this paper, hybrid features that combine three types of feature descriptors, including spatial, frequency, CEDD, and BSIF features, are used to develop an efficient CBIR algorithm; individual analysis of the descriptors is also studied and the results are presented. The rest of the paper is organized as follows: Section 2 briefly reviews important CBIR algorithms, Section 3 explains various spatial, frequency, and hybrid-domain feature extraction methods, Section 4 presents simulation results and discussion, and Section 5 concludes the paper.

This study uses an experimental model, that is, a research model that tests, manipulates, and influences the variables or attributes of the research. The stages of the research are data collection, data preprocessing, DC coefficient feature extraction, distance measurement with the Manhattan Distance, and evaluation of precision and recall. The experiments were implemented in Python 3.

Data Collection

The dataset consists of natural images: photos of animals (birds, dogs, cats) and plants (fruits), 15,000 photos in total, downloaded from www.robots.ox.ac.uk/~vgg/data/ and http://chaladze.com/l5/ in JPEG format.

Data Preprocessing

Color is considered one of the important low-level visual features, as the human eye can differentiate between visuals on the basis of color.
The images of real-world objects taken within the range of the human visual spectrum can be distinguished on the basis of differences in color [24][25][26][27]. The color feature is stable and is hardly affected by image translation, scale, and rotation [28][29][30][31]. Through the use of the dominant color descriptor (DCD) [24], the overall color information of an image can be replaced by a small number of representative colors. DCD is one of the MPEG-7 color descriptors and uses an effective, compact, and intuitive format to describe the representative color distribution and features. Shao et al. [24] presented a novel approach for CBIR based on this MPEG-7 descriptor: eight dominant colors are selected from each image, features are compared with the histogram intersection algorithm, and the complexity of the similarity computation is thereby simplified.

According to Duanmu [25], classical techniques retrieve images by their labels and annotations, which cannot meet users' requirements; researchers therefore focused on another way of retrieving images, namely retrieval based on their content. The proposed method uses a small image descriptor that adapts to the context of the image through a two-stage clustering technique. The COIL-100 image library was used for the experiments, and the results showed the proposed method to be efficient [25].

Wang et al. [26] proposed a color-based method for retrieving images on the basis of image content, built from the consolidation of color and texture features. This provides an effective and flexible estimate of how early humans process visual content [26]. The fusion of color and texture features offers a robust feature set for color image retrieval approaches. Experimental results reveal that the proposed method retrieved images more accurately than other traditional methods. However, the feature dimensions are not higher than other approaches and require a high computational cost, and a pairwise comparison of both low-level features is used to calculate the similarity measure, which could be a bottleneck [26].

Various research groups have studied the completeness property of invariant descriptors [27]. Zernike and pseudo-Zernike polynomials, which are orthogonal basis moment functions, can represent an image by a set of mutually independent descriptors, and these moment functions possess orthogonality and rotation invariance [27]. PZMs proved to be more robust to image noise than Zernike moments. Zhang et al. [27] presented a new approach to derive a complete set of pseudo-Zernike moment invariants: the link between the pseudo-Zernike moments of the original image and those of images of the same shape but distinct orientation and scale is formed first, and a complete set of scale and rotation invariants is obtained from this relationship. This proposed technique proved to perform better in pattern recognition than other techniques [27].

Guo et al. [28] proposed a new approach for indexing images based on features extracted from error diffusion block truncation coding (EDBTC). To derive the image feature descriptor, the two color quantizers and the bitmap image produced by EDBTC are processed using vector quantization (VQ).
For assessing the resemblance between the query image and the images in the database, two features, the Color Histogram Feature (CHF) and the Bit Pattern Histogram Feature (BHF), are introduced. The CHF and BHF are calculated from the VQ-indexed color quantizer and the VQ-indexed bitmap image, respectively, and the distances evaluated from CHF and BHF can be used to assess the likeness between two images. Experimental results show that the proposed scheme performs better than former BTC-based image indexing and other existing image retrieval schemes; EDBTC has good ability for image compression as well as for indexing images for CBIR [28].

Liu et al. [29] proposed a novel method for region-based image learning that utilizes a decision tree named DT-ST. Image segmentation and machine learning techniques are the basis of this technique. DT-ST controls the feature discretization problem, which frequently occurs in contemporary decision tree learning algorithms, by constructing semantic templates from low-level features for annotating the regions of an image. It presents a hybrid tree that handles noise and tree-fragmentation problems well and reduces the chance of misclassification. In semantic-based image retrieval, the user can query images through both labels and image regions. Experiments conducted to check the effectiveness of the technique reveal that it provides higher retrieval accuracy than traditional CBIR techniques and that the semantic gap between low- and high-level features is reduced to a significant degree. The technique also performs better than the two established decision tree induction algorithms, ID3 and C4.5, in image semantic learning [29].

Islam et al. [30] presented a supreme color-based vector quantization algorithm that can automatically categorize image components. The new algorithm handles variable feature vectors, such as dominant color descriptors, more efficiently than the traditional vector quantization algorithm and is accompanied by novel splitting and stopping criteria. Through these criteria, the number of clusters can be learned and unnecessary over-fragmentation of region clusters can be avoided.

Jiexian et al. [31] presented a multiscale distance coherence vector (MDCV) for CBIR. The motivation is that different shapes may have the same descriptor and that the distance coherence vector algorithm may not completely eliminate noise. The technique first uses the Gaussian function to smooth the image contour curve, and it is invariant to operations such as translation, rotation, and scaling transformation.

Before going through the steps of the DC coefficient extraction process, data preprocessing is carried out so that the images are easy to compute, with an image size of 256x256 pixels. The sample images used in this study can be seen in Figure 2.

DC Coefficient Feature Extraction

The DC extraction used for indexing can be described as follows: the DC coefficient, the first coefficient of each 8 x 8 block, is taken from every block of the image, and these values form the indexing key H used for retrieval.

Distance Measurement Method

The distance measurement method used in this study is the Manhattan Distance. This method is used to determine whether two images are the same or not by computing the distance between the two images being tested. The level of similarity can be expressed by a value.
The smaller the resulting value, the closer the similarity between the two images.

Effectiveness of Image Search (Image Retrieval)

In this research, 25 query images are retrieved against each database. For each query, 20 images are returned (displayed), and precision and recall are then calculated.

DC Coefficient Extraction Results

After the extraction process, the result is a new image representation that previously contained all DCT coefficients (1 DC coefficient and 63 AC coefficients) and now contains only the DC coefficients. The results of the image extraction can be seen in Table 1 (Table 1. DC Coefficient Feature Extraction Results; columns: No, Original Image, DC-Extracted Image; five example rows).

Content Based Image Retrieval Results

The images whose DC coefficients have been extracted are used for the Content Based Image Retrieval process with the Manhattan Distance method, shown in Figure 4.

Manhattan Distance Precision and Recall Results

The precision and recall results obtained with the Manhattan Distance method are shown in Figure 5, and Figure 6 shows the image processing time of the CBIR test using the Manhattan distance calculation.

Discussion

The Manhattan Distance method shows interesting results: the best precision value is 1 for the fruit image class and the cat image class, and the worst precision value is 0.24 for the dog image class. The average precision obtained with the Manhattan Distance method is 0.66. Content Based Image Retrieval also becomes faster because only the DC coefficients are used instead of all DCT coefficients, making the indexing and image retrieval process much faster: less than 2 seconds, with a maximum value of 1.876 seconds, as shown in Figure 6.

V. CONCLUSION

Based on the results obtained, it can be concluded that extracting DC features from the images can reduce image storage, and that using the Content Based Image Retrieval method with the Manhattan Distance gives precision and recall results of 0.6624. The use of images reduced to the DC coefficient feature can increase the effectiveness of Content-Based Image Retrieval. By using the Euclidean Distance method, the precision and recall results are higher than with the Manhattan Distance method. The dataset used in this study only contains natural digital images; future research is expected to use other types of images, such as artificial images. For measuring the distance between two images, methods other than the Manhattan Distance can also be used in order to obtain better results.
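As a hedged end-to-end illustration of the retrieval and evaluation steps described above (ranking database images by Manhattan distance over DC-coefficient features, then computing precision and recall for the returned images), the following Python sketch uses toy data. It is not the published implementation; the function names, labels, and numbers are hypothetical, and the top-k value mirrors the 20-image display described in the paper only by analogy.

```python
import numpy as np

def retrieve(query_feature, database_features, top_k=20):
    """Rank database images by Manhattan distance to the query feature vector."""
    dists = np.sum(np.abs(database_features - query_feature), axis=1)
    return np.argsort(dists)[:top_k]          # indices of the top_k most similar images

def precision_recall(retrieved_labels, query_label, n_relevant_in_db):
    """Precision over the returned set and recall against all relevant images."""
    relevant_retrieved = int(np.sum(retrieved_labels == query_label))
    precision = relevant_retrieved / len(retrieved_labels)
    recall = relevant_retrieved / n_relevant_in_db
    return precision, recall

# Hypothetical example: 6 database images with 1-D toy features and class labels 0/1
db = np.array([[1.0], [1.2], [5.0], [0.9], [5.2], [4.8]])
labels = np.array([0, 0, 1, 0, 1, 1])
idx = retrieve(np.array([1.1]), db, top_k=3)
p, r = precision_recall(labels[idx], query_label=0, n_relevant_in_db=3)
print(idx, p, r)   # the three class-0 images are returned: precision 1.0, recall 1.0
```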
2022-10-11T17:18:05.035Z
2021-11-15T00:00:00.000
{ "year": 2021, "sha1": "3b1385b008f8339fba83e551b4fa0ea9094dc063", "oa_license": "CCBYSA", "oa_url": "https://ojs.stmikpringsewu.ac.id/index.php/JurnalTam/article/download/1092/pdf", "oa_status": "GOLD", "pdf_src": "Anansi", "pdf_hash": "d9ac431ed6768ad38719627b0d7c81e6740275ef", "s2fieldsofstudy": [ "Computer Science" ], "extfieldsofstudy": [] }
246465951
pes2o/s2orc
v3-fos-license
A Palette of Fluorescent Aβ42 Peptides Labelled at a Range of Surface-Exposed Sites Fluorescence-based single molecule techniques provide important tools towards understanding the molecular mechanism of complex neurodegenerative diseases. This requires efficient covalent attachment of fluorophores. Here we create a series of cysteine mutants (S8C, Y10C, S26C, V40C, and A42C) of Aβ42, involved in Alzheimer’s disease, based on exposed positions in the fibril structure and label them with the Alexa-fluorophores using maleimide chemistry. Direct stochastic optical reconstruction microscopy imaging shows that all the labelled mutants form fibrils that can be detected by virtue of Alexa fluorescence. Aggregation assays and cryo-electron micrographs establish that the careful choice of labelling position minimizes the perturbation of the aggregation process and fibril structure. Peptides labelled at the N-terminal region, S8C and Y10C, form fibrils independently and with wild-type. Peptides labelled at the fibril core surface, S26C, V40C and A42C, form fibrils only in mixture with wild-type peptide. This can be understood on the basis of a recent fibril model, in which S26, V40 and A42 are surface exposed in two out of four monomers per fibril plane. We provide a palette of fluorescently labelled Aβ42 peptides that can be used to gain understanding of the complex mechanisms of Aβ42 self-assembly and help to develop a more targeted approach to cure the disease. Introduction Alzheimer's disease is a devastating neurodegenerative disease affecting a large and growing number of individuals world-wide. Understanding the underlying molecular mechanisms is crucial to developing a targeted approach in the search for a cure. Fluorescence based super resolution microscopy methods such as direct stochastic optical reconstruction microscopy (dSTORM) or stimulated emission depletion (STED) can provide valuable structural information [1][2][3][4]. The efficient incorporation of fluorophores as molecular probes in the proteins of interest can open up avenues of additional optical methods such as for example the investigation of molecular size distributions such as fluorescence correlation spectroscopy (FCS) [5,6], cell biology using flow cytometry and fluorescence microscopy [7], Western blotting and protein interaction screening using protein arrays combined with fluorescence scanners and microarray readers [8], as well as determination of affinities and exchange rates using Förster resonance energy transfer (FRET) [9,10]. Of particular interest for neuronal function and dysfunction, fluorescence microscopy studies with fluorophore-labelled recombinant protein have provided key insights into the self assembly and membrane interaction of α-synuclein, the amyloid protein involved in Parkinson's disease [11][12][13]. Super-resolution fluorescence microscopy studies of Aβ42, the amyloid peptide involved in the pathology of Alzheimer's disease, have provided valuable information on the in vitro fibril elongation [14] as well as cellular uptake [15] and aggregation inside cells [16]. These studies have mainly employed synthetic fluorescent Aβ42 peptide due to challenges in successfully covalently attaching fluorophores to the peptide. Previous attempts to covalently label recombinant Aβ42 with Alexa fluorophores at the N-terminus have yielded low labeling efficiency and significantly retarded the aggregation process, which was remedied by using a low ratio of labelled to unlabelled peptide [17]. 
The same approach permitted the identification of glycogen synthase 3α and β (also named tau kinase 1) as interaction partners of on-pathway Aβ42 oligomers [18]. In the present study we optimize the position of covalent labelling of recombinant Aβ42 with fluorophores incorporated at different surface-exposed positions of the peptide in Aβ42 fibrils, as guided by recent high-resolution structures [19][20][21]. To enable covalent labelling with thiol reactive fluorophores, we thus create five cysteine mutants of Aβ42: S8C and Y10C for labelling of the unstructured N-terminal region, S26C, V40C and A42C on the surface of the fibril core ( Figure 1). As a proof-of-concept, each mutant is labelled with Alexa488 and Alexa647, and the labelling efficiency is evaluated using size exclusion chromatography (SEC) and SDS PAGE. Aggregation assays of the labelled peptides, alone and in mixtures with unlabelled wild-type peptide, using thioflavin T (ThT) as well as Alexa probe fluorescence, were performed to study whether the perturbation of the fibril formation rate depends on the position of the label. Additionally, cryogenic transmission electron microscopy (cryoTEM) was used to determine the morphology of the labelled peptide fibrils compared to unlabelled WT Aβ42 fibrils. [19] and Small Angle X-ray Scattering (SAXS) [21]. The positions where cysteine is introduced through site directed mutagenesis are shown-Ser8 (blue), Tyr10 (green), Ser26 (yellow), Val40 (orange), Ala42 (red). These are the positions used for covalent attachment of Alexa fluorophores. (B) Molecular structures of Alexa488 and Alexa647. Expression and Purification of Peptides The plasmid carrying synthetic genes with Escehrichia coli (E. coli) optimized codons for Aβ42 wild-type (PetSac, cloned by us [22]) as well as S8C, Y10C, S26C, V40C, and A42C (Pet3a, purchased from Genscript) were transformed into Ca 2+ competent cells of E. coli strain BL21 DE3 pLysS star and the protein was expressed in auto-induction medium [23]. The peptides were purified using ion exchange chromatography (IEX) as described with the minor change that lower salt concentration (50 mM NaCl, (Duchefa Biochemie, CAS no. 7647-14-5)) was used to elute the peptides, and size exclusion chromatography (SEC) on a 26 × 600 mm Superdex 75 column was used instead of spin filters for base on molecular size. The ion exchange and SEC buffers contained 1 mM dithiothreitol (DTT, PanReac Applichem, CAS no. 3483-12-3) to avoid dimerization of cysteine mutants. The final SEC was performed in buffer without DTT in order to isolate monomer and remove DTT from the sample prior to adding the label. The purified monomeric peptides were lyophilized as aliquots until further use. Labelling of Purified Peptides with Alexa Fluor Lyophilized fractions were dissolved in 50 µL milliQ water yielding a peptide concentration of ∼14 µM. Alexa fluor 488 or 647 at a concentration of 3-4 mM in 20 µL dimethyl sulfoxide (DMSO) (Sigma, CAS no. 67-68-5) was added to the dissolved peptide in order to have access dye in the labelling mixture, which was kept overnight at 4 • C. The mixture was then added to 1 mL of 6 M GuHCl (Sigma, CAS no. 50-01-1), 20 mM sodium phosphate, 0.2 mM ethylenediaminetetraacetic acid (EDTA; J. T Baker, CAS no. 6381-92-6), pH 8.5, and subjected to SEC on a Superdex 75 10/300 column in 20 mM sodium phosphate (Merck, CAS no. 6381-92-6) buffer pH 8.0, with 0.2 mM EDTA. 
The absorbance at 280 nm, as well as at 488 nm or 647 nm, was monitored using a Quadtech detector to follow the elution of the labelled peptide and excess dye, as well as any unlabelled peptide, if present. The aliquots collected from the SEC were analysed by SDS PAGE and stored at −80 °C until further use. The same peptide concentration and conditions were used in the labelling of all mutants. Note: in the case of the Y10C mutant, the absorbance at 214 nm was monitored instead of the absorbance at 280 nm. Since the Aβ42 sequence has only one Tyr residue contributing to A280, the Y10C mutant gives no absorbance at 280 nm; we therefore monitor the absorbance at 214 nm, to which all peptide bonds contribute.

Alexa488 and Alexa647 were chosen for labelling to test labelling with chemically different probes with widely different excitation wavelengths. The peptides S8C, Y10C, and V40C were labelled with both Alexa488 and Alexa647 to enable future experiments requiring peptides labelled with two different fluorophores. The concentration of the labelled peptides was determined using the correction factor for Alexa488 and Alexa647 in the following formula:

c = (A280 − A488/647 × c.f.) / ε280

where A488/647 is the absorbance at 488 or 647 nm, c.f. is a correction factor, which is 0.11 for Alexa488 and 0.03 for Alexa647, and ε280 is the extinction coefficient for Aβ42, which is 1400 L mol−1 cm−1.

Preparation of Samples for Kinetic Experiments

The lyophilized aliquots of the purified WT Aβ42 were dissolved in 1 mL of 6 M GuHCl, 20 mM sodium phosphate, 0.2 mM EDTA, pH 8.5, and subjected to SEC on a Superdex 75 10/300 column in 20 mM sodium phosphate buffer pH 8.0, with 0.2 mM EDTA. The middle part of the monomer peak was collected in a low-binding tube (Axygen, MCT-150-L-C) on ice and was typically found to have a concentration in the range 20-80 µM (determined from the absorbance of the collected fraction using ε280 = 1400 L mol−1 cm−1). Frozen aliquots of the Alexa-labelled peptides were kept on ice for thawing.

Aggregation Kinetics by Thioflavin T Fluorescence

Aggregation kinetics experiments were performed for samples with different ratios of Alexa-labelled peptide to WT Aβ42, as follows: 1:1.5, 1:3.5, 1:7, 1:15, 1:23, 1:31 and 1:38 in 20 mM sodium phosphate and 0.2 mM EDTA, pH 8.0 buffer. The total monomer concentration was close to 5 µM. The samples were pipetted as multiple replicates into a 96-well plate (Corning 3881), 100 µL per well. The experiments were initiated by placing the 96-well plate at 37 °C in a plate reader (Fluostar Omega). The ThT fluorescence was measured through the bottom of the plate every 120 s using an excitation filter at 440 nm and an emission filter at 480 nm.

Cryo-TEM

For all peptides, fibrils were prepared from samples with monomer close to 5 µM total concentration, which were incubated at 37 °C in PEGylated plates (Corning 3881) in a plate reader and collected after reaching the plateau in ThT fluorescence. Specimens for cryo-TEM were prepared in an automatic plunge freezer system (Leica EM GP). The climate chamber temperature was kept at 21 °C, and the relative humidity was ≥90% to minimise loss of solution during sample preparation. The specimens were prepared by placing 4 µL of solution on glow-discharged lacey formvar carbon-coated copper grids (Ted Pella) and blotted with filter paper before being plunged into liquid ethane at −183 °C.
This leads to vitrified specimens, avoiding component segmentation and rearrangement, and the formation of water crystals, thereby preserving original microstructures. The vitrified specimens were stored under liquid nitrogen until measured. A Fischione Model 2550 cryo transfer tomography holder was used to transfer the specimen into the electron microscope, JEM 2200FS, equipped with an in-column energy filter (Omega filter), which allows zeroloss imaging. The acceleration voltage was 200 kV and zero-loss images were recorded digitally with a TVIPS F416 camera using SerialEM under low dose conditions with a 10 eV energy selecting slit in place. dSTORM imaging was performed on the ELYRA P1 imaging system (Zeiss, Germany) which included an inverted microscope with 100X oil immerse objective lens (1.46 NA). The samples were mounted in the ZEISS level adjustable insert holder and placed on the PIEZO stage with Auto-focus adjusted. The fluorescence dyes were excited by the selected three laser lines, 488 nm, 543 nm and 633 nm respectively. Accordingly, the filter sets #4 for collection of the emission light were chosen dependent on the fluorescent dyes. For Alexa 488, the 488 nm laser line was used for excitation and the emission light filter was BP 495-550; for Alexa 546, a 543 nm laser line was used for excitation and the emission light filter was BP 570-620; for Alexa 647, a 633 nm laser line was used for excitation and the emission light filter was LP 655. The images were acquired onto a 256 × 256 pixel frame of an electron multiplying charge coupled device (EMCCD) camera (iXon DU897, Andor). dSTORM To generate the dSTORM images, the PALM processing function in the ZEN software was applied. First, the overlapping signals were discarded by using a multi-emitter model for the whole image sequences. Second, to distinguish the real signal peak, the mask size was set to 7 pixels and the ratio of signal/noise was set to 7. After the filtration, the lateral and axial drift during acquisition was corrected after reconstruction of dSTORM images by using the Drift function that was tested by a fluorescent beads as fiducial marker prior to the analysis. After drift correction, the images were proceeded by grouping function. Finally, the present dSTORM images were corrected in according to the distributions of followed parameters: photon number, precision size and first frame. To remove the unspecific background, the molecules were filtered out if the number of molecules was less than 10 in an area of 100 nm perimeter. Expression and Purification of Peptides Sequence homogeneity and purity of the starting material is crucial for reproducible aggregation kinetics of peptides and its analysis. We thus expressed recombinant WT human Aβ42 as is, that is, without any tags except Met0, which is required to initiate translation, and purified from inclusion bodies using ion exchange and size exclusion steps, as described before. This mode of expression of Aβ(M1-42) relies on the peptide having low enough solubility to form inclusion bodies, which avoids the degradation in E. coli, a common fate of small unstructured proteins. We attempted this mode of expression and purification for all cysteine mutants and achieved good yield. Covalent Labelling of Peptides with Alexa Fluors Peptides were labelled with Alexa fluorophores using maleimide chemistry and purified by SEC. 
The monitoring of the absorbance at multiple wavelengths (214 and 280 nm as well as 488 or 647 nm) indicated that all peptides are successfully labelled (Figure 2A). The eluted peptide was collected in multiple aliquots and analyzed by SDS PAGE to confirm labelling and purity. Figure 2B shows a coomassie-stained gel of eluted fractions of Alexa-488 labelled S8C. The S8C-Alexa488 monomers can be seen below the 10 kDA Mw standard in fractions 1, 2, 3, and 4. The same gel was transferred to a blue-light UV table to observe fluorescence the Alexa-488 label attached to the S8C monomers ( Figure 2C). The single band for S8C-Alexa488 monomers in the coomassie-stained gel is also a strong indicator of close to ∼100% labelling of monomers. In case of inefficient labelling, an extra band with unlabelled monomers appears below the band for labelled monomers on the SDS PAGE. One such case can be seen in Supplementary Materials Figure S1D. The same labelling conditions were used for all peptides and resulted in successful covalent attachment of the Alexa fluor to the cysteine residue at the respective position in the mutant. The results can be seen in Supplementary Materials Figure S1. Aggregation Kinetics The fibril formation of the peptides was investigated under conditions at which Aβ42 is known to aggregate rapidly. Aggregation in peptide samples with a total monomer concentration of close to 5 µM with different ratios of labelled :unlabelled peptide was followed by monitoring ThT fluorescence as a function of time at 37 • C in 20 mM sodium phosphate and 0.2 mM EDTA, pH 8.0, for all five mutants. Under these conditions, all mutant peptides form ThT-positive aggregates over time. The aggregation curves have a sigmoidal-like appearance, comprising a lag phase, an exponential phase, and a final plateau, characteristic of nucleated polymerization reactions (see Figure 3). We find that peptides labelled at S8C can be used at 1:23-1:31 ratio of S8C:WT with no change in kinetics and at 1:3.5-1:15 ratio with a small retardation. For peptides labelled at Y10C, no effects were seen for samples with 1:3.5-1:38 Y10C:WT. For peptides labelled at S26C, no effects were seen for samples with 1:15-1:38 S26C:WT, and a small retardation at ratios 1:3.5 and 1:1.5. For peptides labelled at V40C, little effects were seen for samples with 1:31-1:38 V40C:WT, but a progressively increasing retardation at ratios 1:15-1:3.5. For peptides labelled at A42C, little effects were seen for samples with 1:3.5-1:38 A42C:WT. At 1:1.5 ratio of mutant:WT, all labelled peptides caused a delay of aggregation. Two of the labelled peptides, S8C and Y10C, were found to form fibrils on their own, albeit at much lower rate than wt Aβ42. This is exemplified for S8C labelled with Alexa488 in Figure 3. Morphology of Aggregates Cryo-TEM was used to study the morphology of the end-stage fibrils for all labelled peptides. In typical WT Aβ42 aggregates, individual filaments can be observed, and two filaments are twisted around each other along a common axis, seen as nodes that appear along the fibril at regular intervals. Figure 4 shows the end-stage fibrils of the Alexa-labelled peptides formed at 1:3.5 ratio with WT Aβ42. S8C-Alexa488+WT fibrils show very similar morphology to WT Aβ42. The fibrils seem rigid with sharp twists of the filaments at regular intervals. This is the same for the fibrils of V40C-Alexa488+WT, and A42C-Alexa488+WT. For Y10C-Alexa647+WT, the fibrils appear very short, and don't appear to be as straight as the Aβ42 fibrils. 
The fibril morphology for S26C is also perturbed. S26C-Alexa647+WT fibrils are very long, and the node-to-node distance is longer than for typical WT Aβ42 fibrils. Fibrils of the two Alexa-labelled peptides formed in the absence of unlabelled WT Aβ42 are shown in Figure 5. S8C-Alexa488 forms short fibrils with sharp twists, i.e., nodes appear at short intervals. Y10C-Alexa647 fibrils are significantly shorter and exhibit a different morphology compared to the same peptide co-aggregated with WT Aβ42 as well as typical Aβ42 fibrils. dSTORM Imaging of Fibrils dSTORM imaging was carried out for all the labelled peptides in order to study whether the labelled peptide in incorporated in the end-stage fibrils formed in 20 mM sodium phosphate and 0.2 mM EDTA at pH 8. Figure 6 shows the dSTORM images of fibrils of the labelled peptides. The fibrils of S8C-Alexa488 and Y10C-Alexa647 which aggregate without the presence of unlabelled WT are shown, along with S26C-Alexa647, V40C-Alexa488, and A42C-Alexa488 in a 1:3.5 mix with unlabelled WT Aβ42. These dSTORM images prove that the Alexa-labelled monomers form fibrils that are detected in dSTORM microscopy by virtue of Alexa fluorescence. Discussion The covalent labelling of fluorophores to Aβ42 is challenging because both aggregation kinetics and fibril structure are sensitive to small changes in peptide composition and fluorophores typically have the size comparable to 7-10 amino acid residues and may be both hydrophobic and charged. Previous studies have used synthetic Aβ42 peptide for fluorescence based studies; however, it has been shown that sequence purity is of great importance in the mechanistic analysis of protein aggregation. Synthetic peptides may contain mismatches and racemates, and indeed synthetic Aβ42 is shown to aggregate significantly slower than recombinant peptide, and at the same time is unable to mimic the neurotoxic behavior of the recombinant peptide [24]. Strategic positioning of the fluorophore based on the fibril structure of Aβ42 may be one route to minimize the influence on the self-assembly of the peptide. Previous attempts to label Aβ42 with Alexa fluor at a cysteine residue added at the N-terminus resulted low labelling efficiency and caused great perturbation of aggregation kinetics. An labelled:unlabelled peptide ratio of 1:170 was required to achieve an aggregation profile similar to WT Aβ42 [17]. This might be due to the general low reactivity close to the Asp1 residue of Aβ42 peptide, as previously noted when attempting to cleave the Met0-Asp1 peptide bond [22]. In this study, we set out to find if there are more optimal fluorophore labelling positions in Aβ42 by covalently attaching Alexa fluorophores at different positions of the peptide based on their location in the fibril structure. Through site directed mutagenesis we introduce cysteine residues at five different positions of Aβ42 for fluorophore labelling using maleimide chemistry. We thus created the mutants S8C and Y10C at the N-terminal region of Aβ42, and mutants S26C, V40C, and A42C on the surface of the fibril core ( Figure 1). We tested the labelling efficiency for all of these mutants, and performed time dependent aggregation kinetics for all labelled mutants in presence of WT Aβ42 in different ratios to test the extent of perturbation due to the presence of the fluorophore on the peptide. We also studied the fibril morphology and fluorophore inclusion for the end-stage fibrils of these peptides using cryoTEM and dSTORM. 
Whether oligomers of the labelled peptides can successfully mimic the neurotoxicity of WT Aβ42 remains to be addressed. The eighth residue of Aβ42 is a serine, a polar amino acid on the flexible N-terminal region, which is solvent exposed in all four monomers of a fibril plane (Figure 1). We introduce cysteine, another polar amino acid, in its place through site directed mutagenesis in order to covalently attach Alex fluorophores using maleimide chemistry. The S8C mutant yields high labelling efficiency of Alexa488 as shown in Figure 1. Aggregation kinetics for S8C in presence of different ratios of WT Aβ42 show an aggregation profile very similar to WT. Even with a high ratio of labelled:unlabelled peptide (1:1.5), the lag phase is not extended significantly compared to WT alone. The fibril morphology for S8C-Alexa488+WT (1:3.5) appears to be very similar to that of typical WT Aβ42. The fibrils are straight, with two filaments twisting around each other at regular intervals creating "nodes". Labelling Alexa-fluor at this position thus does not cause any significant perturbations of the aggregation kinetics nor fibril morphology when mixed with WT. S8C-Alexa488 also aggregates in the absence of unlabelled WT, which was studied by following the fluorescence quenching of Alexa488 upon aggregate formation. In this case, the lag phase is considerably extended, when the aggregation of 2.7 and 5.5 µM S8C-Alexa488 is compared to 5 µM WT (Figure 3). In this case, the fibrils are much shorter than those formed in presence of an equimolar amount of WT Aβ42. Y10C is the second mutant we created on the N-terminal region of Aβ42. We achieve high labelling efficiency in this case as well, and the aggregation kinetics don't show any significant perturbation even at high ratio of labelled:unlabelled peptide. However, when co-aggregating with WT Aβ42, we observe that the fibril morphology is perturbed for Y10C-Alexa647. Typical WT Aβ42 fibrils appear straight and rigid. However, Y10C-Alexa647+WT (1:3.5) fibrils appear flexible and appear as short filaments uncharacteristic of mature endstage fibrils. Y10C-Alexa647 also aggregates in the absence of unlabelled WT Aβ42. The fibrils formed from the labelled peptide alone display longer node-to-node distances but are so short that the periodical twists around the fibril length are few. Labelling Alexa-flour at this position thus seems to cause perturbation in fibril structure and morphology. Ser26 is positioned on the surface of the fibril core of the WT Aβ42 fibril. In a fibril plane of four monomers, two Ser26 are located where the two individual filaments meet and two Ser26 appear fully exposed to water [21] (Figure 1). Replacing a hydrophilic serine residue with a large moiety such as an Alexa fluorophor could thus potentially impose steric clashes in between the filaments leading to the formation of an altered fibril morphology. At high molar ratio, 1:1.5 S26C-Alexa647:WT, this variant displays a significantly extended lag phase compared to WT alone. The fibril morphology of S26C-Alexa647+WT (1:3.5) appears to be different than that of the typical WT Aβ42 fibrils. The fibrils are longer, indicating higher elongation rate and lowered rate of secondary nucleation. The node-to-node distance is longer and the fibril diameter is larger, indicating a change in the assembly of the two filaments in the fibril. 
This position of labelling thus serves as an example of significant perturbation caused by the Alexa fluorophores as regards both aggregation kinetics and fibril morphology. It has been shown that site directed mutagenesis of Val40 and Ala42 to polar serine residues does not cause any major perturbations of the aggregation mechanism and fibril morphology [25]. Hence we created mutants V40C and A42C to attach Alexa fluorophores at the respective positions. Val40 and Ala42 are also located where the two individual filaments meet (Figure 1), however unlike S26C, fluorophore-labelled V40C and A42C do not show significant perturbations of fibril morphology. As for S26C, the aggregation kinetics at high molar ratio, 1:1.5 variant:WT, is significantly perturbed, but less so or not at all at lower ratio. A possible explanation for the maintained fibril morphology at 1:3.5 molar ratio, is the possible segregation in the plane of four monomers in a fibril; one Alexa-labelled monomer may be in the positions where the labelled residues are facing the solvent, while the unlabelled WT monomers may be incorporated at the positions where Val40 and Ala42 are buried in the interface where the two filaments meet. It is interesting to note that the labelled S26C, V40C or A42C do not seem to aggregate in the absence of unlabelled WT Aβ42. Most likely, the labelled fibrils are unable to form fibrils of an alternative fold that contains both a stable hydrophobic core and exposed fluorophores at all four positions in a fibril plane. Finally, we note that the structures of the dye molecules Alexa488 or Alexa647 are very different, but this does not seem to affect fibril formation, which can be seen from V40C-Alexa488 and V40C-Alexa647 showing similar aggregation kinetics and fibril morphology (SI Figures S2 and S3) . Conclusions In conclusion, we have successfully created a palette of different Aβ42 mutants for covalent and site-specific attachment fluorophores using maleimide chemistry. All the positions were labelled successfully and the peptides labelled at various positions can be used in different studies depending on the purpose. It is clear that the position of the fluorophore label matters for keeping the perturbation of peptide behavior to a minimum, and we find the lowest perturbation of the aggregation kinetics and fibril morphology for the labelled S8C and V40C mutants. While S8C and Y10C aggregate independently, Y10C shows altered fibril morphology when labelled with Alexa. And while V40C and A42C show little fibril morphology perturbation, they do not aggregate independently when labelled with Alexa, which can be understood based on those positions being at the interface between filaments in two out of four monomers per fibril plane. The peptides reported here may be useful in mechanistic studies of Aβ42 aggregation and cellular uptake and all plasmids will be shared with the scientific community. Data Availability Statement: Data and plasmids will be shared upon reasonable request. Conflicts of Interest: The authors declare no conflict of interest.
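As a worked illustration of the correction-factor arithmetic given in the labelling Methods above, here is a minimal Python sketch, assuming the standard form c = (A280 − A_dye × c.f.) / ε280 with the correction factors and extinction coefficient quoted in the text; the absorbance values in the example are hypothetical and this is not the authors' code.

```python
# Illustrative only: correction-factor estimate of labelled-peptide concentration.
# Assumes c = (A280 - A_dye * cf) / eps280, with the values quoted in the Methods.
CORRECTION_FACTOR = {"Alexa488": 0.11, "Alexa647": 0.03}
EPS_280_ABETA42 = 1400.0  # L mol^-1 cm^-1 (single Tyr in the Abeta42 sequence)

def peptide_concentration(a280, a_dye, dye="Alexa488", path_length_cm=1.0):
    """Return labelled-peptide concentration in mol/L from A280 and the dye absorbance."""
    corrected_a280 = a280 - a_dye * CORRECTION_FACTOR[dye]
    return corrected_a280 / (EPS_280_ABETA42 * path_length_cm)

# Hypothetical example: A280 = 0.020 and A488 = 0.050 for an Alexa488-labelled sample
print(round(peptide_concentration(0.020, 0.050, "Alexa488") * 1e6, 2), "uM")  # ~10.36 uM
```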
2022-02-02T16:17:11.645Z
2022-01-31T00:00:00.000
{ "year": 2022, "sha1": "798cd040469975cb6142b111400d14cfaac6f697", "oa_license": "CCBY", "oa_url": "https://www.mdpi.com/1422-0067/23/3/1655/pdf", "oa_status": "GOLD", "pdf_src": "PubMedCentral", "pdf_hash": "4979e7c1031531d121ee80455533f5f23e01b6b1", "s2fieldsofstudy": [ "Medicine", "Chemistry" ], "extfieldsofstudy": [ "Medicine" ] }
240545420
pes2o/s2orc
v3-fos-license
MiR-323a-3p acts as a tumor suppressor by suppressing FMR1 and predicts better esophageal squamous cell carcinoma outcome Background Esophageal squamous cell carcinoma (ESCC) has unfavorable outcomes with the highest incidence seen in China. Accordingly, exploring effective molecular biomarkers is of great value. MicroRNAs (miRNAs) are posttranscriptional regulators of gene expression and modulate numerous biological processes in tumors. Our study aimed to identify prognostic miRNAs and investigate their role in ESCC. Methods Prognosis-related plasma miRNAs were detected by miRNA microarray and qRT-PCR. Functional assays and molecular mechanism studies were used to investigate the role of miRNA in ESCC. Results Over-expression of miR-323a-3p was associated with a favorable prognosis. MiR-323a-3p negatively regulated proliferation, migration, and invasion. Through biological predictions, the fragile X mental retardation 1 (FMR1) was found to be a potential target of miR-323a-3p. Further investigation revealed that miR-323a-3p directly targeted and suppressed FMR1. MiR-323a-3p and FMR1 mRNA, as well as miR-323a-3p and the FMR1-encoded protein FMRP, showed negative correlations. Luciferase activity of FMR1-3′-UTR, but not mutant counterparts, was decreased by mimic compared with that of the control. The compromised cell proliferation, migration, and invasion induced by transfection with miR-323a-3p mimic were rescued by transfection with a FMR1 expression plasmid. Tumors induced by miR-323a-3p overexpressed ESCC cells grew significantly slower in vivo and resulted in smaller tumor masses. Metastatic lung colonization was also inhibited by miR-323a-3p overexpression. Conclusions MiR-323a-3p was significantly associated with survival and acted as a tumor suppressor by inhibiting proliferation, migration, and invasion via the regulation of FMR1. MiR-323a-3p is a promising biomarker and may be a potential therapeutic target. Supplementary Information The online version contains supplementary material available at 10.1186/s12935-022-02541-x. course, and even responsiveness to treatments. In addition, due to the high invasiveness and early metastasis of ESCC, patients are usually diagnosed with locally advanced disease and a poor prognosis with 5-year overall survival (OS) of 20-34% [2,3]. In view of the particularity and suboptimal survival of patients with esophageal cancer in China, specific explorative studies regarding prognostic predictors are needed in order to choose proper individual treatments. MicroRNAs (miRNAs) are small noncoding RNAs of 18-22 nucleotides in length that bind to the 3′-untranslated regions (UTRs) of target mRNAs to post-transcriptionally regulate gene expression, mainly through gene silencing and inhibition of translation [4]. Accordingly, miRNAs have emerged as a new class of biomolecules with important roles in cellular functions and are regarded as potential biomarkers and valuable regulators of many physiological and pathological processes [5]. Emerging evidence has demonstrated that miRNAs play important roles in tumor processes and their aberrant expression is closely associated with tumor proliferation, differentiation, apoptosis, and metastasis [5]. Aberrant expression of miRNAs is also closely related to clinical outcomes in several types of carcinomas, including lung cancer [6], breast cancer [7], and liver cancer [8]. However, compared to these cancers, an understanding regarding the roles of miRNAs in ESCC is lacking. 
To identify miRNAs involved in ESCC carcinogenesis, we first performed miRNA microarray analyses to survey miRNA expression in plasma specimens of 12 patients with ESCC. The aberrant expression was further confirmed using quantitative real-time polymerase chain reaction (qRT-PCR) analysis of plasma specimens from 30 patients with ESCC. We discovered that low expression of miR-323a-3p was significantly associated with poor prognosis which suggested that miR-323a-3p might be a tumor suppressor. Through literature review, we found that miR-323a-3p was related to several cancers [9][10][11]. Like Chen et al. [11] showed that miR-323a-3p suppressed the growth of osteosarcoma cells. Overexpression of miR-323a-3p decreased the cell viability, colon formation and induced the apoptosis of osteosarcoma cells. Additionally, down-regulation of miR-323a-3p was significantly correlated with the poor survival outcome of the bladder cancer patients and miR-323a-3p was frequently downregulated in bladder cancer tissues and three cell lines [10]. These results also suggested the potential tumor suppressive role of miR-323a-3p in cancers. Thus, next it seems very important to clarify the function and mechanism of miR-323a-3p in ESCC. Furthermore, we determined the tumor suppressive function of miR-323a-3p was achieved by targeting the fragile X mental retardation 1 (FMR1) gene to inhibit proliferation, migration, and invasion, both in vitro and in vivo. Patients and plasma samples We collected plasma samples from 30 patients with locally advanced stage ESCC who received radiotherapy or chemoradiotherapy at the Cancer Hospital, Chinese Academy of Medical Sciences from 2007 to 2010. None of the patients had undergone surgery. Eligibility criteria were as follows: age between 18 and 70 years, Eastern Cooperative Oncology Group (ECOG) performance status (PS) ≤ 1 and no history of other malignances. All patients provided written informed consent and the study was approved by the Institutional Review Boards of Cancer Hospital, Chinese Academy of Medical Sciences. MiRNAs isolation Total RNA, which included the miRNA fraction, was extracted from serum samples using the mirVana PARIS Kit (Ambion, USA) and from cultured cells using Trizol reagent (Invitrogen) according to the manufacturers' instructions. The purity and concentration of all RNA samples were evaluated according to their 260/280 nm absorbance ratios, which were determined using a Nan-oDrop 2000C spectrophotometer (NanoDrop Technologies, Rockland, DE, USA). MiRNAs microarray and qRT-PCR analyses Expression levels of the miRNAs were determined using an AB TaqMan Human MicroRNA Array (Applied Biosystems), which included probes for 748 mature human miRNA sequences. Expression of miR-323a-3p was determined by the stem-loop qRT-PCR method. Briefly, complementary DNA (cDNA) was prepared using 2 µg total RNA as template and synthesized using a Super-Script ™ First-Strand Synthesis System for RT-PCR Kit (Invitrogen, USA) based on the specific stem-loop RT primers for miR-323a-3p or U6 (RiboBio, China). U6 small nuclear RNA was used as an internal control for miR-323a-3p. For quantification of FMR1 mRNA, total RNA was reverse transcribed into cDNA using a First-Strand cDNA Synthesis Kit (Promega, USA) using FMR1-specific primers. The FMR1 primer sequences were as follows: forward, 5′-CGG CAA ATG TGT GCC AAA GA-3′; reverse, 5′-ATG TGC TCG CTT TGA GGT GA-3′. 
The qRT-PCR was performed on an ABI Prizm 7300 Sequence Detection System (Applied Biosystems) using Light Cycler DNA Master SYBR Green I Mix (Roche, Switzerland) according to the manufacturer's instructions. All qRT-PCR reactions were performed in triplicate. The expression levels of miR-323a-3p and FMR1 mRNA were quantified using the 2 −ΔΔCt method and normalized to internal control levels. Cell transfection We selected KYSE-30, KYSE-150, and YES-2 cells for the functional studies. Synthesized miR-323a-3p mimics, inhibitor, mimic negative control (NC), and NC inhibitor were purchased from RiboBio (Guangzhou, China). Cell transfections were performed at a concentration of 50 nM using Lipofectamine 2000 (Invitrogen, USA) according to the manufacturer's protocol. Total RNA and protein were harvested 48 h after transfection and used for qRT-PCR and western blot analysis, respectively, as described. Cell proliferation, migration, and invasion assays Cell proliferation analysis was performed using transfected cells. The cells were seeded into 96-well plates at approximately 3 × 10 3 cells per well in 200 μL of medium. At 0, 24, 48, and 72 h post seeding, 100 μL 5% MTS reagent (Promega, USA) in phosphate-buffered saline (PBS) was added to each well and incubated for 2 h. Absorbance was then read at a wavelength of 490 nm using a microplate reader (Bio-Rad, USA). Cell migration and invasion assays were analyzed using Transwell chambers (Costar, USA) and Matrigel matrix (BD Biosciences, USA). Briefly, 1 × 10 5 transfected cells in 200 μL serum-free medium were seeded into the upper chamber with 100 μL of 2% Matrigel for migration or without Matrigel for invasion assays. The lower chambers were filled with 600 μL RPMI-1640 containing 20% FBS. The chambers were then incubated at 37 °C with 5% CO 2 for 8-20 h. The cells that migrated or invaded the membrane of the insert were fixed, strained, and analyzed. Cells in 10 random microscopic fields (100× magnification) were counted for each insert. Western blot analysis Cells were lysed with cold lysis buffer consisting of 1% NP-40 supplemented with a complete protease inhibitor tablet (Sigma, USA) for 30 min on ice. Equivalent amounts of total-protein extracts were separated by 10% SDS-PAGE and transferred to polyvinylidene difluoride (PVDF) membranes (Millipore, USA). The blots were blocked with 3% bovine serum albumin (BSA) for 1 h and the incubated with primary antibodies against fragile X mental retardation protein (FMRP, 1:500; Proteintech, China) and β-actin (control, 1:5000, Santa Cruz, USA) at 4 °C overnight. After washing with PBS three times, the membranes were incubated with their corresponding secondary antibodies at room temperature for 1 h. The membranes were washed again and then incubated with the chemiluminescence substrate. Photographs were taken using an ImageReader LAS-4000 (Fujifilm, Japan) and analyzed using Image-Pro Plus 6.0 software. Plasmid construction and luciferase activity assays The FMR1 3′-UTR target site sequence and sequence with an eight-nucleotide mutation in the miR-323a-3p target site were synthesized and cloned downstream of the luciferase gene in the pEGFP-C1 luciferase vector (Generay, China). The resulting vectors were named FMR1-3′-UTR-WT and FMR1-3′-UTR-Mut, respectively. Cells were seeded into 12-well plates and co-transfected with the constructed plasmids, internal control Renilla luciferase plasmid (pRL-SV40), and mimic or mimic NC using Lipofectamine 2000 (Invitrogen, USA). 
After 48 h, cells were harvested and luciferase activity was measured using a Dual Luciferase Assay Kit (Promega, USA) and a Synergy H1 hybrid reader (Biotek, USA). Results were normalized to Renilla luciferase activity and the data expressed as relative luciferase activity. Experiments were performed in triplicate on three separate occasions.

Retroviral infection

The lentivirus for miR-323a-3p was purchased from GeneChem. The miR-323a-3p lentivirus was prepared and used to transfect cells according to the manufacturer's protocol.

Analysis of in vivo tumorigenicity

BALB/c nu/nu male mice, 4 weeks old, were purchased from Vital River for the in vivo tumorigenicity study. All experimental procedures were approved in accordance with the institutional animal welfare guidelines. Mice were injected subcutaneously in the hind legs with 2 × 10⁶ cells in 0.2 mL. The size of the tumors was determined from caliper measurements of the subcutaneous tumor masses, and tumor volumes were calculated according to the formula V = (4/3)π × r1 × r2², where r1 and r2 are the two measured radii and r1 > r2. Each experimental group included six mice and the experiments were repeated three times. In addition, mice were injected via the tail vein with 1 × 10⁶ cells in 0.2 mL. Mice were killed by cervical dislocation.

Statistical analysis

Differences between groups were estimated using Student's t test and the Chi-square test. OS was defined as the time between the date of diagnosis and death. Local recurrence-free survival (LRFS) was defined as the time from the date of diagnosis to local or regional lymph node recurrence, last follow-up, or death. Survival data were analyzed using the Kaplan-Meier method and compared with the log-rank test. All experiments were performed at least three times. For the expression of miR-323a-3p, a relative quantification (RQ) > 1 was regarded as overexpression, and RQ ≥ 2 or ≤ 0.5 was considered significantly different. Data are expressed as mean ± SEM. Differences between groups were analyzed by two-tailed Student's t-test. Statistical significance is indicated by *P < 0.05, **P < 0.01, and ***P < 0.001 versus the relevant control. The data were analyzed using SPSS 19.0 software or Prism 7.0 software (GraphPad).

Upregulation of miR-323a-3p correlated with good ESCC clinical outcome

We selected 12 patients and divided them into two groups, Arm A and Arm B, according to the difference in prognosis. The clinical characteristics of the two groups were well balanced (Table 1). Survival of patients in Arm A was significantly better than that of patients in Arm B. The median OS was not reached in Arm A and was 10 months in Arm B (p < 0.001; Fig. 1a). The median LRFS was not reached in Arm A and was 6.5 months in Arm B (p = 0.003; Fig. 1b). Using the miRNA microarray, miR-323a-3p was detected in the plasma of the two groups and found to be differentially expressed; it was upregulated in Arm A (RQ = 2.15), the group with good prognosis. (The other differentially expressed miRNAs are shown in Additional file 1.) To confirm the microarray findings, qRT-PCR was performed using plasma from 30 patients with ESCC. These patients were also divided into two groups according to differences in prognosis (Fig. 1c, d). The clinical characteristics were also well balanced between the two groups (Table 1). The qRT-PCR results showed that miR-323a-3p was overexpressed in the group with good clinical outcomes (RQ = 2.1).
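The RQ values quoted above come from the 2^-ΔΔCt quantification described in the Methods; the Python sketch below illustrates that calculation for a target miRNA normalized to U6 and referenced to a calibrator group. It is illustrative only: the Ct numbers are made up and the function is not the authors' code.

```python
# Illustrative 2^-ddCt relative quantification (not the authors' analysis code).
def relative_quantification(ct_target, ct_reference, ct_target_cal, ct_reference_cal):
    """RQ = 2^-(dCt_sample - dCt_calibrator), with dCt = Ct(target) - Ct(reference)."""
    d_ct_sample = ct_target - ct_reference
    d_ct_calibrator = ct_target_cal - ct_reference_cal
    return 2.0 ** (-(d_ct_sample - d_ct_calibrator))

# Hypothetical Ct readings: miR-323a-3p vs U6 in a sample, relative to a calibrator group
rq = relative_quantification(ct_target=27.0, ct_reference=21.0,
                             ct_target_cal=28.1, ct_reference_cal=21.0)
print(round(rq, 2))  # ~2.14; RQ > 1 would be regarded as overexpression
```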
The median value of miR-323a-3p expression in the 30 samples was defined as the cutoff value to classify low expression of miR-323a-3p (> 9.0167) and high expression of miR-323a-3p (≤ 9.0167). Downregulation of miR-323a-3p expression was found to be significantly related to a poor LRFS (p = 0.004; Fig. 1e). MiR-323a-3p inhibited cell proliferation, migration, and invasion in vitro Seven ESCC cell lines and human esophageal epithelial cell (HEEC)-NE2 cells were evaluated by qRT-PCR for miR-323a-3p expression levels. In most ESCC cell lines, except KYSE-30, the expression of miR-323a-3p was significantly lower than that in the NE2 cells (Fig. 2a). This suggested a tumor-suppressive role for miR-323a-3p in ESCC. To explore the functions of miR-323a-3p in ESCC, we transfected ESCC cell lines with miR-323a-3p mimics or mimic NC and inhibitor or inhibitor NC. As shown by the transfection efficiencies in Fig. 2b, c, miR-323a-3p expression levels were significantly increased by miR-323a-3p mimic and decreased by miR-323a-3p inhibitor, respectively. Next, we examined the effects of miR-323a-3p on cellular functions. Data from MTS assays were used to generate growth curves for the various KYSE-30 and KYSE-150 transfected cells. We determined the upregulation of miR-323a-3p resulted in significantly decreased cell proliferation compared with that of NC (Fig. 3a). We also counted the number of cells that migrated to the basal side of membranes in Transwell assays. The results revealed that migration and invasion capabilities were increased when miR-323a-3p expression was silenced. In contrast, both these capabilities were decreased by treatment with the miR-323a-3p mimic (Fig. 3b, c). These results suggested that miR-323a-3p was able to inhibit cell proliferation, migration, and invasion in vitro. FMR1 was a direct target of miR-323a-3p To further investigate the mechanisms by which miR-323a-3p acted as a tumor suppressor, we searched Tar-getScan, miRbase, and MIRDB and ultimately identified FMR1 as a potential target of miR-323a-3p. To determine whether FMR1 mRNA expression negatively correlated with miR-323a-3p expression in ESCC cells, both YES-2 and KYSE-30 cells were analyzed using qRT-PCR. The results indicated that overexpression of miR-323a-3p in ESCC cells resulted in downregulation of FMR1 mRNA compared with controls, while knockdown of miR-323a-3p resulted in upregulation of FMR1 mRNA (Fig. 4a). The relationship between miR-323a-3p expression and that of FMR1-encoded protein FMRP was also evaluated. Western blot analysis revealed an inverse relationship between the two. Specifically, FMRP expression was markedly decreased in miR-323a-3p mimic-transfected ESCC cells, but increased in miR-323a-3p inhibitor-transfected ESCC cells (Fig. 4b). It was concluded that miR-323a-3p repressed FMR1 at both the mRNA and protein level. To further investigate whether miR-323a-3p directly targeted the 3′-UTR of FMR1 as predicted (Fig. 4c), we performed luciferase reporter assays. Wild-type or mutant 3′-UTR of FMR1 was inserted downstream of a firefly luciferase reporter gene to create FMR1 3′-UTR-WT and FMR1 3′-UTR-MUT, respectively (Fig. 4d). Each firefly luciferase vector and renilla luciferase vector (pRL-SV40) were co-transfected with mimic or mimic NC into KYSE-30 cells. A significant decrease in relative luciferase activity of approximately 60-70% was observed in ESCC cells transfected with FMR1 3′-UTR-WT and miR-323a-3p mimic compared to those transfected with the mimic NC. 
However, the luciferase activity of ESCC cells transfected with the mutant FMR1 reporter did not change (Fig. 4e). These results suggested that miR-323a-3p could bind directly to the 3′-UTR of FMR1. MiR-323a-3p regulated cell functions through FMR1 To investigate the contribution of FMR1 to the biological functions of miR-323a-3p regarding proliferation, migration, and invasion, we examined whether reconstitution of FMR1 had an effect on miR-323a-3p-induced cell function changes. We co-transfected an expression vector carrying the FMR1 open reading frame (ORF) without the 3′-UTR or a control vector (C1) together with the miR-323a-3p mimic. Cell proliferation, migration, and invasion abilities were all rescued by the restored expression of FMR1 in the miR-323a-3p mimic-treated cells (Fig. 5a, b). Taken together, these results demonstrated that the miR-323a-3p-mediated cell function changes in ESCC cells were at least partially caused by repression of FMR1. (Fig. 5 legend: forced expression of FMR1 abolishes the phenotype created by miR-323a-3p mimic transfection; a cell proliferation and b cell migration and invasion were determined in ESCC cells transfected with miR-323a-3p mimic plus a control vector (C1), NC plus C1, or miR-323a-3p mimic plus the FMR1 expression plasmid; *p < 0.05, ***p < 0.001.) MiR-323a-3p suppressed tumor growth and metastasis in vivo To assess the effects of miR-323a-3p on tumor repression in vivo, two pooled sets of ESCC cell lines were stably transduced with miR-323a-3p lentiviral vectors or negative controls. As expected, lenti-miR-323a-3p-infected ESCC cells expressed a much higher level of miR-323a-3p than that of the control-transduced cells (Fig. 2d). Transduced KYSE-30 cells were chosen for subsequent studies. Both KYSE-30 cell pools were implanted subcutaneously into six mice per group and tumor growth was monitored weekly. Tumors in the lenti-miR-323a-3p group grew significantly slower than those of the negative control group, resulting in smaller tumor masses (Fig. 6a, b). To further confirm the correlation between miR-323a-3p expression and tumor metastasis, both cell pools were separately injected into the tail veins of mice. After 3 months, the animals were analyzed for lung metastases. All mice in each group exhibited lung metastases; however, the numbers of lung metastases in the lenti-miR-323a-3p group were markedly lower than those in the control group (Fig. 6c). These results indicated that miR-323a-3p inhibited tumor growth and metastasis in vivo. Discussion Esophageal cancer is one of the most aggressive cancers worldwide and a leading cause of cancer-related deaths [12,13]. Squamous cell carcinoma (SCC) and adenocarcinoma (AC) are the two major histologic subtypes of esophageal cancer and their incidence varies among geographic areas [14]. Approximately 52% of esophageal cancers worldwide occur in China, and ESCC accounts for approximately 95% of those cases [1]. ESCC is clinically challenging, not only to treat but also with respect to choosing effective treatments. Accordingly, there is an urgent need for reliable predictors. MiRNAs are critical regulators of transcriptional and post-transcriptional gene silencing and are involved in multiple developmental processes of cancer. 
The role of miRNAs in ESCC progression is becoming better recognized and there is increasing interest in identifying the key miRNAs involved in aggressive ESCC phenotypes in order to improve diagnosis, predict prognosis, and develop new therapeutic strategies [15][16][17][18][19][20][21][22][23][24][25]. Thus, we hypothesized that miRNAs might play an important role in the prediction of prognosis for ESCC and may be helpful in decision making for individual treatment options. In our current study, we found that some miRNAs expressed differently between the 2 arms (Additional file 1), the miR-323a-3p was one of the miRNAs. And elevated levels of miR-323a-3p in plasma were closely associated with better outcomes. Furthermore, we performed a series of experiments to determine the functions of miR-323a-3p in ESCC cells, which showed that enhancing miR-323a-3p expression suppressed cell proliferation, migration, and invasion in vitro and suppressed tumor growth and metastasis behaviors in vivo. Furthermore, enhancing the expression of miR-323a-3p inhibited each of these aggressive phenotypes by targeting FMR1. Taken together, these data indicate that miR-323a-3p served as a tumor suppressor in ESCC. Our study is the first to demonstrate the role of miR-323a-3p in ESCC, as well as to identify the target gene. MiR-323a-3p exists in a miRNA cluster in the chromosomal region 14q32.31, which is a region critical for placental growth and embryonic development. This suggests that miR-323a-3p may be useful in the early detection of ectopic pregnancy (EP) [26]. In addition, evidence suggests that miR-323a-3p plays important roles in several diseases, such as rheumatoid arthritis and Alzheimer's disease [19,27,28], and may be a potential therapeutic biomarker for these diseases. Otherwise, miR-323-3p is a candidate for diagnosing cardiomyopathy in Friedreich's ataxia patients [29] and a candidate biomarker for coronary artery disease in acute coronary syndrome patients [30]. However, there are few studies regarding the role of miR-323a-3p in cancer, which have focused on pancreatic ductal adenocarcinoma (PDAC) [31], bladder cancer [10], osteosarcoma [11], cervical cancer [32], and glioblastoma [33], not ESCC. As mentioned before, in PDAC, bladder cancer and osteosarcoma, miR-323a-3p is a tumor suppressor, whereas in cervical cancer and glioblastoma it is a tumor promoter. Importantly, this suggests that miR-323a-3p may play different roles and exhibit different antitumor effects when it comes to carcinogenesis of different types of cancer. Our data showed for ESCC that high expression of miR-323a-3p was related to better LRFS, inhibited cell proliferation, migration, and invasion in vitro, and suppressed tumor growth and metastasis in vivo. These results indicate that miR-323a-3p acted as an anti-tumor factor and could be predictive of a better prognosis in ESCC. Moreover, it was previously demonstrated that overexpression of miR-323a-3p increased the activity of a Wnt reporter and that the Wnt/β-catenin pathway is important for radiosensitivity [27,28]. Considering the patients in our study all received radiotherapy, we propose that miR-323a-3p was a potential factor for radiosensitivity, which should be further explored. Dysregulation of miRNAs has been shown to contribute to tumor initiation and progression by regulating target genes [5]. After using the iterative algorithms to predict target genes of miR-323a-3p, it showed that AFT6, KRAS, EED and FMR1 were all the potential target gene. 
In our current study, we demonstrated that FMR1 was the target gene of miR-323a-3p in ESCC. An inverse correlation was identified between the expression of miR-323a-3p and levels of FMR1 mRNA or FMRP and the biological functions of miR-323a-3p were determined by modulating FMR1. Yi et al. [34] also predicted and confirmed that miR-323a-3p directly binds to the 3′-UTR of FMR1, which is in agreement with our results. Based on the literature, FMR1 is definitely related to fragile X syndrome (FXS), which is an X-linked disorder [35]; however, few studies have focused on cancers. Patients with FXS are known to have a low risk of malignant cancers [36], but Kalkunte et al. [37] reported a case report of a boy with FXS developing a glioblastoma. However, the glioblastoma in this boy behaved as a cytogenetically or physically unusual tumor compared with that of glioblastomas in adult patients, perhaps due to the absence of FMRP. This indicates that FMRP has aggressive functions when it comes to cancers. Liu et al. [38] evaluated the genetic background of hepatocellular carcinoma (HCC) and investigated differential expression of genes. They validated that FMR1 is one of the genes upregulated in HCC. In addition, FMRP and FMR1 mRNA levels correlate with prognostic indicators of aggressive breast cancer, lung metastases probability, and triple negative breast cancer (TNBC) [39]. These results further demonstrate the cancer-promoting activity of FMR1 and the antitumor activity of miR-323a-3p. Furthermore, Luca et al. [39] demonstrated that FMRP binds mRNAs involved in epithelial mesenchymal transition (EMT). Thus, according to the functional biological effects of miR-323a-3p, especially regarding changes in migration and invasion, we hypothesized that EMT might also be a process that is influenced by the expression of miR-323a-3p and FMR1. This is reasonable as EMT has been shown in many cancers to promote cancer migration and intravasation from primary cancer during the metastatic cascade. It is this process by which epithelial cells lose their characteristic polarity and adopt a mesenchymal phenotype. Conclusions In summary, miR-323a-3p acted as a tumor-suppressive miRNA in ESCC. It was predictive of a better prognosis and may be helpful in deciding proper effective treatment choices. Further analysis showed that miR-323a-3p mediated tumor-related functions by targeting FMR1. To our knowledge, this study provides the first evidence to identify miR-323a-3p as a promising biomarker for ESCC.
2021-10-21T15:47:11.683Z
2021-09-09T00:00:00.000
{ "year": 2022, "sha1": "4898168168da6f96849a8051e256528580529339", "oa_license": "CCBY", "oa_url": null, "oa_status": null, "pdf_src": "PubMedCentral", "pdf_hash": "f1c2e31d0e91a696d3be5adf98d3cc051c07f4ea", "s2fieldsofstudy": [ "Medicine" ], "extfieldsofstudy": [ "Medicine" ] }
226441386
pes2o/s2orc
v3-fos-license
Aggressiveness of Phytophthora cinnamomi in avocado seedlings and effect of pathogen inoculum concentration and substrate flooding periods on root rot and development of the plants Agressividade de Phytophthora cinnamomi em mudas de abacateiro e efeito de concentrações de inóculo do patógeno e períodos de inundação do substrato na podridão de raízes e no desenvolvimento das plantas - The present study evaluated the aggressiveness of Phytophthora cinnamomi isolates and the effect of pathogen inoculum concentration and periods of substrate flooding on root rot and plant development. Twelve pathogen isolates were inoculated on the collar region of avocado seedlings with or without wounding. Only 31.3% of the inoculated plants without wounding developed lesions, compared to 100% of the plants with wounding, while the isolates showed different aggressiveness levels. Avocado seedlings had their substrate inoculated with 0, 0.1% and 1.0% (m/v) wheat seeds colonized by the pathogen per pot, and four periods of 0, 12 and 24 h substrate flooding were produced at fortnightly intervals. The assessed parameters were number of leaves per plant, collar diameter, plant height, leaf area index, visual severity percentage of infected roots, fresh mass (%) of diseased roots and dry mass of shoot and roots. Both pathogen inoculation and substrate flooding caused root rot; however, combination of these two factors produced an additional effect on disease symptoms. Root rot severity was superior to 50% when soilless substrate had 0.1% (m/v) P. cinnamomi inoculum and flooded for 12-24 h after inoculation, conditions that can be recommended for pathogenicity and disease control studies using potted avocado plants. Introduction One of the major diseases affecting avocado plants (Persea americana Miller) worldwide, both in the nursery and in the field, is Phytophthora root rot (PRR). Several Phytophthora species can cause the disease, with Phytophthora cinnamomi Rands being the most important of them and the most widespread in the avocado producing regions of the world (DANN et al., 2013). In Brazil, P. cinnamomi is the species usually associated with the disease (SUMIDA et al., 2009;PICCININ et al., 2016, RODRIGUEZ, 2016. In the State of São Paulo, Brazil, P. cinnamomi mating type A2 was the only species found so far associated with the disease, as confirmed by surveys carried out in avocado nurseries and commercial orchards located in 17 municipalities in the state (FEICHTENBERGER; SILVA, personal communication). PRR is a disease of fine feeder roots that impacts plant productivity and longevity. Infected feeder roots become brittle and turning black as the root tissue decay, which restricts water and nutrient uptake by the tree that leads to wilting, defoliation and branch-dieback, production of smaller fruits, and eventual tree death (REEKSTING et al., 2014;PICCININ et al., 2016). P. cinnamomi occasionally invades larger roots and sometimes causes a weeping canker at the base of the tree that may extend up the trunk for at least 1 m. These trunk cankers originate at, or just below, the soil line (DANN et al., 2013). High disease incidence and severity are frequently associated with poorly drained clayish soils subject to long flooding periods (DANN et al., 2013). Motile pathogenic spores, called zoospores, are produced and released from sporangia onto the surface of soil or substrate in the presence of water, being capable of infecting the roots. 
Besides high soil humidity, the disease requires temperatures between 21 and 30 °C to occur (PICCININ et al., 2016). In addition, soil flooding may damage the roots due to anoxia, predisposing plants to infection by the pathogen (BESOAIN et al., 2005). Avocado trees are sensitive to flooding, and plants with poor growth and yield, symptoms of nutrient deficiencies, branch-dieback, and even tree death may occur in flooded or poorly drained soils (SCHAFFER et al., 2013). Recommended measures for the management of this disease include using healthy seedlings planted in deep and well-drained soils and adopting appropriate irrigation to prevent prolonged flooding periods (PICCININ et al., 2016). Leaf spraying with potassium phosphite, combined or not with dolomitic limestone or gypsum applied to the soil, allows partial recovery of diseased avocado plants (SILVA et al., 2016). Phosphites can also be injected into the stem of plants for disease control. However, scientific studies on the effectiveness of root rot control methods in avocado trees are relatively scarce in Brazil, mainly due to difficulties in reproducing disease symptoms in plants inoculated with the pathogen. Based on the need to standardize pathogen inoculation methodologies, the present study aimed to evaluate the effect of substrate flooding periods and pathogen inoculum concentration on disease development in 'Hass' and 'Margarida' avocado seedlings, as well as to determine the aggressiveness of P. cinnamomi isolates from different producing regions of São Paulo State, Brazil, in 'Margarida' avocado seedlings. Material and Methods Experiments were conducted in 2018 at the facilities of the São Paulo Agency for Agribusiness Technology (APTA), Central-West Regional Center - Bauru, São Paulo State, Brazil (22º20'37"S and 49º03'12"W; 615 m altitude), under greenhouse conditions with the maximum temperature controlled at 30 ºC. The plants used in the experiments were non-grafted seedlings originating from 'Margarida' and 'Hass' avocado seeds, supplied by "Campo de Ouro" and "Jaguacy" farms, located in Piraju and Bauru municipalities, São Paulo State, respectively. Aggressiveness of Phytophthora cinnamomi isolates in avocado seedlings Inoculum of each one of the 12 evaluated P. cinnamomi isolates consisted of 5 mm-diameter mycelial disks removed from the edges of colonies grown for ten days at 25 ºC on Petri plates (9 cm diameter) containing potato-dextrose-agar medium (PDA). Approximately three months after sowing, 'Margarida' avocado rootstock seedlings were inoculated with a mycelial disk of the pathogen, which was attached with adhesive tape to the collar region of the plants, at a height of 2 cm from the substrate surface. The inoculation site was previously wounded, or not, with a histological needle to a depth of 2 mm. For the non-inoculated control, PDA disks without the pathogen were used. Lesion length in the collar of the plants was visually assessed with a ruler at 3-4 day intervals over 50 days, until the lesions no longer increased in size in two subsequent evaluations. After measuring the length of the external lesions, the stem region was longitudinally opened with a switchblade to assess the length of the internal lesions. Data on external lesion length over the 50-day period were used to calculate the area under the disease progress curve (AUDPC). Once lesion evaluation was finalized, or the plants had died, the evaluated isolates were re-isolated from the lesions to confirm them as the causal agents of the disease. 
This process consisted in disinfesting lesion fragments from the plant collar with sodium hypochlorite at 0.5%, for one minute, followed by washing with sterile distilled water, dividing them into smaller fragments and plating the small fragments on agar-water medium (AW). Experimental design was in randomized blocks with four replicates, and one plot was represented by one plant. Severity data were transformed into√x and underwent parametric analysis. Means were compared according to Tukey's test (p = 0.05), using the computer program Systems for Analysis of Variance -SISVAR (FERREIRA, 2019). One of the most aggressive isolates was selected for subsequent study of the effect of substrate flooding and inoculum concentration on the disease progress and plant development. Effect of Phytophthora cinnamomi inoculum concentration x flooding Inoculum of the isolate LRS 18/11, one of the most aggressive isolates in the previous experiment, was produced in wheat seeds by adding an equal volume of distilled water to the seeds and autoclaving the mixture for two consecutive days, during 30 minutes, at 121 ºC. Twenty mycelial disks, cultured in PDA, were transferred to each bag containing 250 g autoclaved wheat seeds, incubated at 25 ºC, in the dark, during four weeks, and weekly shaken to homogenize grain colonization (LEONI; GHINI, 2003). Minimum, mean and maximum temperatures in the greenhouse were weekly evaluated to calculate the averages of temperatures during the experiment, from June 11 th , 2018 to August 10 th , 2018. For seedling inoculation, each pot received 0, 0.1 and 1.0% mass/volume (m/v) wheat seeds colonized by the pathogen, burying the inoculum in five equidistant holes at 5 cm depth (GALLO-LLOBET et al., 1999). Plants were daily watered to reach the substrate humidity at field capacity. One day after inoculation and at four biweekly intervals, pots were individually placed in 12-liter buckets, and flooded or not with water, keeping the substrate during flooding completely covered by water for periods of 12 and 24 hours. On the day of inoculation and two months after it, the following parameters were evaluated: number of leaves per plant; collar diameter at 1cm above the substrate surface, measured with a pachymeter; plant height, obtained with a measuring tape, and leaf area index (LAI), expressed as cm 2 /seedling and obtained with a leaf area integrator AccuPAR, model LP-80 PAR (Photosynthetically Active Radiation). The difference for each variable of tested plant was calculated, relative to a period of two months after inoculation. At the end of the experiment, evaluations also included visual severity (%) of diseased roots, determined by estimating the percentage of necrotic root system area; fresh mass (%) of diseased roots and dry mass (g) of shoot and roots. Dry mass was obtained after drying the materials for three days in electric oven, at 70 ºC, until constant mass was reached. The time for occurrence of the first leaf wilting symptoms after inoculation was recorded and also the incidence of wilting or dead plants showing all dry leaves at the end of the experiment. 
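The mean comparisons described in this Methods section (square-root-transformed data compared by Tukey's test at p = 0.05) were run in SISVAR; purely as an illustration of that kind of comparison, a minimal Python sketch using statsmodels is shown below, with invented lesion lengths and the block effect of the design omitted.

import numpy as np
from statsmodels.stats.multicomp import pairwise_tukeyhsd

# Hypothetical external lesion lengths (cm), four replicates per isolate;
# these numbers are illustrative only, not values from Table 1.
lesion = np.array([5.1, 4.8, 5.6, 5.0,    # isolate LRS 18/11
                   4.9, 5.3, 5.2, 4.7,    # isolate LRS 02/11
                   0.4, 0.6, 0.3, 0.5])   # isolate LRS 133/11
isolate = ["LRS 18/11"] * 4 + ["LRS 02/11"] * 4 + ["LRS 133/11"] * 4

transformed = np.sqrt(lesion)             # the sqrt(x) transformation used above
tukey = pairwise_tukeyhsd(endog=transformed, groups=isolate, alpha=0.05)
print(tukey.summary())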
After plant evaluations, ten necrotic root fragments per plant were disinfested with sodium hypochlorite at 0.5%, for one minute, washed in sterile distilled water, divided into smaller fragments and plated onto carrot-agar selective medium containing carbendazim (0.01 g/L), nystatin (0.025 g/L), rifampicin (0.01 g/L) and ampicillin (0.25 g/L), to confirm the presence of the root rot causal agent. The experimental design was randomized blocks in a factorial arrangement (three flooding periods × three concentrations of pathogen inoculum), with five replicates; each plot was represented by one plant. Data were transformed into √x or √(x + 0.5) and means were compared according to Tukey's test (p = 0.05), using the computer program SISVAR (FERREIRA, 2019). The experiments with the two avocado cultivars were carried out simultaneously but analyzed separately. Data of the two cultivars were not compared with each other because the plants were of different ages and were kept in pots of different sizes, conditions that could interfere with the performance of the plants in relation to the evaluated parameters. Aggressiveness of Phytophthora cinnamomi isolates in avocado seedlings Visual symptoms of necrotic lesions in the collar region of plants previously wounded or not could be noticed from the third day after inoculation with the P. cinnamomi isolates. Of the plants inoculated without previous wounding, only 31.3% developed symptoms, and those inoculated with the isolates LRS 14/11 and LRS 133/11 did not show any symptoms, which evidences differences in the pathogenicity of the evaluated isolates and also the importance of wounds for the occurrence of typical lesions of the disease. Due to the non-occurrence of the disease in most of the plants inoculated without previous wounding, the isolates were not compared with each other for aggressiveness when this inoculation methodology was used, and these data are not presented. All 12 P. cinnamomi isolates evaluated induced typical lesions of the disease when plants were previously wounded, and differences in the aggressiveness of the isolates were observed, as expressed by lesion length in the collar region of the plants (Table 1). The smallest external and internal lesions and the lowest AUDPC were obtained in plants inoculated with the isolate LRS 133/11, lesions that did not differ in length from those of the control treatment. The greatest external lesions and AUDPC were obtained in plants inoculated with the isolates LRS02/11, LRS17/11, LRS15/11, LRS06/08, LRS01/11, LRS69/11, LRS06/12, LRS14/11 and LRS18/11, while the largest internal lesions were observed in plants inoculated with the isolates LRS02/11, LRS17/11, LRS06/08, LRS01/11, LRS69/11, LRS06/12, LRS14/11 and LRS18/11. A mean length of 5 cm was obtained for the external lesions caused by all the P. cinnamomi isolates, with the exception of the isolate LRS 133/11; for the same lesions, the mean length of the internal lesion was 28% greater than that of the external lesion. Most of the reported stem or collar inoculation methods with P. cinnamomi require previous wounding of the host tissues. They include drilling into host tissue and inserting plugs of mycelial agar (TIPPETT et al., 1983), incising the bark and inoculating with colonized Miracloth discs (PILBEAM et al., 2000) or a colonized mycelial plug (BUNNY et al., 1995; NAIDOO et al.; RODRIGUEZ, 2016). 
In Eucalyptus marginata, as in the present study, all plants were infected when previously wounded but not all were infected when unwounded (O'GARA et al., 1996). These results, whilst supporting the widely accepted view that wounds are an important avenue for host tissue invasion by the pathogen, also demonstrate that wounds are not essential for successful invasion of the tissue. Regarding the progress of the invasion of avocado woody stems by P. cinnamomi, the inner bark and outer layer of wood are invaded, killing the phloem and xylem. On cutting through the bark, a distinct reddish-brown to brown discoloration of the underlying wood is evident (DANN et al., 2013). This can explain the greater size of the internal lesions relative to the external lesions in the inoculated seedlings in the present study (Table 1). In some hosts, such as E. marginata, the lesions are not visible from external examination of the stems; however, after removal of the periderm layer, dark lesions are evident against the vivid green of the healthy phloem (O'GARA et al., 1996). Considering inoculation without previous wounding, the mean length of external and internal lesions was 6.0 and 7.2 cm, respectively, for symptomatic plants (data not shown), close to that observed in plants inoculated with wounding (Table 1). Rodriguez (2016) reported that no differences in the diseased area in the collar region of 'Toro Canyon' and 'Duke 7' avocado seedlings were observed between plants inoculated with or without previous wounding, using a similar inoculation methodology, supporting the idea that, after infection, disease development is similar regardless of whether penetration of the pathogen was favored by wounding. Symptoms of leaf wilting were observed from six days after inoculation, and plant death from 15 days after inoculation. Four plants died as a result of the disease in inoculations without previous wounding, while three plants died in inoculations with previous wounding, and the number of dead plants per isolate was not greater than one. Reisolation of the pathogen from the lesions in AW medium was possible for all 12 inoculated isolates, confirming their pathogenicity to avocado plants. As in the present study, Belisle et al. (2019) also identified significant differences in virulence among P. cinnamomi isolates from avocado in California, USA. Evaluating the pathogen inoculation methodology in the collar of avocado seedlings, Rodriguez (2016) also found differences in the aggressiveness of two pathogenic isolates. Despite the occurrence of differences between isolates in terms of aggressiveness, isozyme, microsatellite and mitochondrial haplotype studies on P. cinnamomi populations from different regions of the world have revealed that, in general, genetic and genotypic diversity among isolates is low in most regions (OLD et al., 1984, 1988; LINDE et al., 1997; MARTIN; COFFEY, 2012; ENGELBRECHT et al., 2017). In particular, in Australasian and South African populations, isozyme and microsatellite analyses indicated that low levels of spatio-temporal genetic diversity, a higher frequency of the A2 mating type and the absence of sexual reproduction are common features among P. cinnamomi isolates (OLD et al., 1984, 1988; LINDE et al., 1997; ENGELBRECHT et al., 2017). 
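Before the flooding-by-inoculum results below, it may help to make the AUDPC summary used for the aggressiveness data explicit. The sketch applies the usual trapezoidal rule to invented lesion-length measurements; the assessment days and values are placeholders, not data from Table 1.

def audpc(days, lesions):
    # Area under the disease progress curve by the trapezoidal rule:
    # sum of 0.5 * (y[i] + y[i-1]) * (t[i] - t[i-1]) over successive assessments.
    total = 0.0
    for i in range(1, len(days)):
        total += 0.5 * (lesions[i] + lesions[i - 1]) * (days[i] - days[i - 1])
    return total

# Hypothetical assessment schedule (days after inoculation) and external
# lesion lengths (cm); placeholders only, not values from the experiment.
days = [3, 7, 10, 14, 17, 21, 25]
lesion_cm = [0.5, 1.4, 2.2, 3.1, 4.0, 4.6, 4.9]
print(round(audpc(days, lesion_cm), 1), "cm-days")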
Effect Phytophthora cinnamomi inoculum concentration x flooding 'Hass' avocado seedlings inoculated with 1.0% (m/v) pathogen concentration and not subjected to flooding showed greater visual severity (%) of diseased roots than non-inoculated plants (Table 3). In the presence of the pathogen and regardless of the flooding period, there were no significant differences between the inoculum concentrations for the evaluated plant parameters, except LAI in 'Hass' plants subjected to 12h flooding (Table 2), possibly because P. cinnamomi has a short generation time and a high reproductive capacity. Inoculum can increase from low, often undetectable levels, to high levels within a few days, particularly when soils are warm, moist and well aerated, and food bases (feeder roots) are in abundance. The process of zoospore production can occur in less than 48 h and hence the pathogen has the capacity to produce millions of spores in a short period (DANN et al., 2013). The interaction between flooding period and inoculum concentration was significant for the number of leaves in 'Margarida' plants (p = 0.023), which showed reduced number of leaves due to flooding associated with the presence of the pathogen, but such a reduction was not observed in the absence of pathogen or flooding (Table 2). A significant interaction (p < 0.05) was also observed for both studied avocado cultivars considering the percentage of diseased root mass and visual severity of diseased roots (Table 3). For the cultivar 'Margarida', there was no significant increase in the percentage of diseased roots in the absence of flooding, regardless of the inoculum concentration; however, 24 h flooding led to a higher percentage of diseased roots in the presence of the pathogen, while with 12 h flooding a significant increase in the disease was observed only in the proportion of diseased root mass at 0.1% of inoculum concentration (Table 3). 'Hass' plants had higher percentages of diseased roots with the association of flooding and pathogen inoculum, showing no differences in the percentage of diseased root mass in the absence of flooding or in the absence of the pathogen (Table 3). In general, regardless of occurrence of interaction between flooding period and inoculum concentration, these two stressing factors compromised the development of 'Hass' and 'Margarida' avocado seedlings (Tables 2 and 3). 'Margarida' avocado seedlings not inoculated with P. cinnamomi had their development negatively affected by 12 h and/or 24 h substrate flooding and showed significant differences from the treatment without flooding for leaf area index (LAI) and percentage of diseased roots, considering both the proportion of root mass and the visual severity (Tables 2 and 3). For the cultivar 'Hass', greater visual severity of diseased roots due to flooding was also found (Table 3). Root rot in plants not inoculated by the pathogen is most likely due to the partial anoxia of roots during substrate flooding periods, since avocado feeder roots are extremely sensitive to anaerobic conditions (SCHAFFER et al., 2013). Waterlogged conditions decreased biomass production with smaller leaf area (DOUPIS et al., 2017). According to BESOAIN et al. (2005), in 'Mexicola' avocado seedlings subjected to six cycles at fortnightly intervals of 0, 12, 24, 48 and 96 hours of flooding, the percentage of diseased roots reached 11.3% root mass after 48 h flooding. 
In the present study, with 12 h flooding this percentage reached 11.9% in 'Hass' and 33.1% in 'Margarida' avocado seedlings (Table 3). In Colombia, oxygen deficit in the soil is reported as one of the most frequent causes of death of avocado trees in the field, as a direct consequence of planting in flooded soils. Waterlogged or flooded soils may result from high rainfall, river overflow, elevated water tables, inadequate drainage and improper irrigation management (PANDEY et al., 2010). Avocado is a flood-sensitive species and flooding exacerbates the effect of PRR (REEKSTING et al., 2014). According to Besoain et al. (2005), the percentage of root rot reached 5.5% in 'Mexicola' seedlings not inoculated with the pathogen and subjected to six fortnightly flooding periods of, on average, 0, 12, 24 and 48 hours. In the present study, root rot increased with the inoculation of the pathogen, reaching 43.7% of the root system. 'Mexicola' avocado seedlings inoculated with the pathogen and exposed to six cycles of 96 h flooding showed 100% diseased roots (BESOAIN et al., 2005). The association of flooding and P. cinnamomi also had a positive synergistic effect in reducing the growth of the avocado rootstock in Kenya, resulting in reduced stem diameter, leaf area, plant height, fresh weight and dry weight (SHIRANDA, 2018). Root infection by P. cinnamomi increases in the presence of free water, and a synergism exists between flooding and PRR outbreaks (PLOETZ; SCHAFFER, 1989; FARROW et al., 2011). The presence of excess water in soil initiates sporangia production and facilitates free movement of released zoospores towards roots, leading to high rates of infection (NIELSEN, 2016). Short periods of soil saturation with aerated water favor P. cinnamomi, whereas prolonged periods of soil saturation lead to anoxia, which will damage or kill avocado roots and inhibit the pathogen (NIELSEN, 2016). The first wilting symptoms were noticed from 26 days after inoculation in 'Hass' and from 32 days after inoculation in 'Margarida' plants. At the end of the experiment, symptoms were detected in 26.7% of 'Margarida' and 31.1% of 'Hass' plants, of which one (2.2%) and three (6.7%) were dead, respectively. In the 'Hass' cultivar, wilting did not occur in non-inoculated plants but was detected in 46.7% of inoculated plants; however, in the absence of flooding, only 20% of inoculated plants showed wilting. In the 'Margarida' cultivar, wilting symptoms only occurred in plants subjected to a certain flooding period, which accounted for 40% of the plants, while only 10% of the plants subjected to flooding showed wilting in the absence of the pathogen, evidencing that wilting symptoms may occur due to the presence of the pathogen or the occurrence of flooding, and that symptoms are intensified when both factors are associated. Besoain et al. (2005) reported that, regardless of the soil flooding period, inoculated 'Mexicola' avocado plants presented wilting, reduced growth, a smaller number of leaves and reduced leaf surface, symptoms always associated with the presence of partial or total rot of the absorbing roots, eventually leading to plant death. However, avocado trees can often tolerate a degree of root rot with no obvious effects on above-ground tree health (PLOETZ; PARRADO, 1988). 
Reduced photosynthesis, transpiration and stomatal conductance can also be detected in root rot-affected trees before visible symptoms of disease become evident (WHILEY et al., 1986;SCHAFFER, 1989). P. cinnamomi was recovered from 63% root fragments from inoculated 'Hass' and 'Margarida' plants, via isolation in selective culture medium, showing no differences in the cultivars as a function of inoculum concentration and flooding periods. The pathogen could not be isolated from non-inoculated plants, evidencing the original health of seedlings and the absence of subsequent contamination among plants of the remaining treatments. During the two months of experiment, minimum, mean and maximum temperatures inside the greenhouse were 12.8, 21.3 and 29.9 °C, respectively, within the adequate range for the occurrence of the disease. Disease develops optimally at temperatures between 19-25 ᴼC and declines at > 30 ᴼC and < 12 ᴼC. Root rot is most severe at lower temperatures, where P. cinnamomi grows better than avocado trees, and is much reduced at higher temperatures that favor the host (DANN et al., 2013). According to Piccinin et al. (2016), the disease is completely inhibited at temperatures above 33 °C. Combined stresses can act synergistically to reduce crop production such as root rot, poor aeration, and waterlogging (RAMÍREZ-GIL et al., 2020). Knowledge pertaining to the physiological and growth tolerance of avocado rootstocks to P. cinnamomi and flooding is limited. Differences in tolerance to flooding were already observed in several avocado rootstocks, although, in general, they were not related to tolerance to P. cinnamomi. The South African rootstock selection R0.06 exhibits superior tolerance to flooding and PRR, but not when these two stresses are combined (REEKSTING et al., 2014). Therefore, rootstock selection should consider adaptation to several combined stresses which suggests that local edaphoclimatic conditions require specific tests of rootstock selection. In a recent evaluation of eleven native avocado genotypes in Colombia, different levels of adaptability to drought and flooding conditions were observed, as well as some degree of resistance to PRR. In general, West Indian-type genotypes showed better adaptability to higher soil moisture conditions and showed lower values of PRR, while Guatemalan-type genotypes showed advantages for drought conditions and intermediate tolerance to PRR and hypoxia/anoxia (RAMÍREZ-GIL et al., 2020). In another study with 'Fuerte', 'Puebla', 'Pinkerton' and 'Booth7' rootstocks, 'Fuerte' and 'Puebla' showed greater tolerance to P. cinnamomi and flooding treatments (SHIRANDA, 2018). Conclusions The avocado P. cinnamomi isolates showed virulence variability and the lesion length produced by the pathogen in the collar region of inoculated avocado seedlings previously wounded proved to be a simple and effective technique for assessing the aggressiveness of P. cinnamomi isolates. P. cinnamomi at concentration of 0.1 -1.0% (m/v) in the substrate and fortnightly 12 -24 h substrate flooding compromised the health of the root system of avocado seedlings. However, combination of the two stresses brings about an additive effect to the disease symptoms. Thus, in pathogenicity and disease control studies using soilless potting media, flooding period after inoculation is recommended for obtaining high rates of P. cinnamomi infection. 
Despite the limitations of extrapolating these results to commercial conditions, avoiding waterlogged conditions in avocado orchards is therefore of fundamental importance for maintaining healthy and productive trees.
2020-10-28T19:11:19.810Z
2020-01-01T00:00:00.000
{ "year": 2020, "sha1": "4ed73491054ef54ee710e0624fed66ef521a3df8", "oa_license": "CCBY", "oa_url": "https://www.scielo.br/j/rbf/a/VDJLw74gbWx7tvHTBrDctCh/?format=pdf&lang=en", "oa_status": "GOLD", "pdf_src": "Adhoc", "pdf_hash": "c7da10018a3ee242421145707f4ee049d9f2a95c", "s2fieldsofstudy": [ "Agricultural And Food Sciences" ], "extfieldsofstudy": [ "Biology" ] }
9342878
pes2o/s2orc
v3-fos-license
Medical-Ethical Principles on Xenotransplantation The spectacular advances in the field of allogeneic organ transplantation achieved in the course of the last 30 years have made it possible to improve not only the life expectation but also the quality of life of a large number of patients. Unfortunately, transplantation surgery has become a victim of its own success: in all countries the increasing demand has led to a considerable shortage of donor organs and consequently to increasingly long waiting lists. As a result, a certain number of patients who could have been helped by an organ transplant are dying. Understandably, alternatives to allotransplantation are constantly being sought. One of these alternatives could be xenotransplantation, i.e. the transplantation of live cells, tissues or organs of one species into the organism of another species. Although in the years from 1990 to 1995 some scientists expected that within short time transplantations of animal organs into humans could be undertaken with real chances of success, today the majority are in fact more pessimistic. All the experimental organ xenografts that have been carried out up till now have proved unsuccessful in the short or medium term. In fact, this new biotechnology poses complex problems, in particular problems of an infectiological, immunological and physiological nature. There are as yet no answers to many of the questions that arise in this connection. It therefore seems to be appropriate for the Swiss Academy of Medical Sciences to define how one should approach this new biotechnology from the point of view of medical ethics. In this connection, respect for the human personality and the question of biological safety have to be given first priority: the risks to which not only the recipients but also those who come into contact with them are exposed have to be kept to the minimum. Man’s obligations towards animals also have to be taken into account. As a matter of fact, it is imperative to reflect on the following fundamental questions: – Bearing in mind our cultural and moral values, is the transplantation of animal organs, tissue or cells in humans desirable or acceptable? – What are the necessary ethical justifications for such a procedure? – What restrictions have to be established? – What priorities can a highly developed country such as ours reasonably set in the field of public health? The spectacular advances in the field of allogeneic organ transplantation achieved in the course of the last 30 years have made it possible to improve not only the life expectation but also the quality of life of a large number of patients.Unfortunately, transplantation surgery has become a victim of its own success: in all countries the increasing demand has led to a considerable shortage of donor organs and consequently to increasingly long waiting lists.As a result, a certain number of patients who could have been helped by an organ transplant are dying.Understandably, alternatives to allotransplantation are constantly being sought.One of these alternatives could be xenotransplantation, i.e. the transplantation of live cells, tissues or organs of one species into the organism of another species. 
Although in the years from 1990 to 1995 some scientists expected that within a short time transplantations of animal organs into humans could be undertaken with real chances of success, today the majority are in fact more pessimistic. All the experimental organ xenografts that have been carried out up till now have proved unsuccessful in the short or medium term. In fact, this new biotechnology poses complex problems, in particular problems of an infectiological, immunological and physiological nature. There are as yet no answers to many of the questions that arise in this connection. It therefore seems to be appropriate for the Swiss Academy of Medical Sciences to define how one should approach this new biotechnology from the point of view of medical ethics. In this connection, respect for the human personality and the question of biological safety have to be given first priority: the risks to which not only the recipients but also those who come into contact with them are exposed have to be kept to a minimum. Man's obligations towards animals also have to be taken into account. As a matter of fact, it is imperative to reflect on the following fundamental questions: - Bearing in mind our cultural and moral values, is the transplantation of animal organs, tissue or cells in humans desirable or acceptable? - What are the necessary ethical justifications for such a procedure? - What restrictions have to be established? - What priorities can a highly developed country such as ours reasonably set in the field of public health? These questions are perhaps not addressed primarily to the doctor, but rather to the philosopher, the ethicist and the theologian. In the final analysis it is the task of society to provide the answers and that of the politicians to decide on them. The aim of the Academy, however, is to open the discussion of these questions (see also Chapters 2 and 5.2). On the basis of Article 24decies of the Constitution, which was accepted in the popular referendum of 7.2.99, a law on transplantations is at present being drafted. This law will apply to any use of human or animal organs, tissues or cells that are destined for transplantation into humans. It is therefore time to open a broad-based debate, so that, after it has been fully informed, society in general will be in a position to express its opinion on the aforementioned questions. Foreword A first version of the medical-ethical principles concerning xenotransplantation was published by the SAMS, for consideration, in the Bulletin des Médecins Suisses (1999; 80:1896-1911). It has to be pointed out that up till now no state and no international organisation has proposed a moratorium on clinical trials in xenotransplantation, but that in all countries they require official permission. The medical-ethical principles on xenotransplantation that are formulated here by the Swiss Academy of Medical Sciences have to be constantly adapted to new medical-technical knowledge acquired from the fields of basic research and applied research. A. Definitions The term xenotransplantation (xenogeneic transplantation or xenografting) covers different technologies, the aims of which are to replace inadequate organs, tissues or cells of one species with a live transplant from another species. Xenotransplantations are described as concordant when the two species are phylogenetically closely related (e.g. apes and humans), and as discordant when they are distantly related (e.g. pigs and humans). 
Xenotransplantation of organs The organ of a donor animal that has been genetically modified in order to prevent a hyperacute rejection of the transplant is implanted in a recipient of another species, by the creation of anastomoses between the blood vessels of the donor organ and the blood vessels of the recipient.In this way the organ is perfused with the blood of the recipient.The transplanted organ, e.g.heart, liver or kidney, has to take over all the functions of the organ that it replaces. Xenotransplantation of tissues A piece of live tissue, e.g.skin, cornea or bone is tranplanted from one species to another.Secondary revascularisation takes place, starting from the recipient tissue. Xenotransplantation of cells Here, a differentiation is made between two types of transplantation: With the first type the donor cells (genetically modified or not modified), e.g.bone-marrow cells, pancreatic cells or foetal brain cells, are injected, at a well vascularised site, into the organism of another species, where they release hormones and other factors that make it possible to compensate the insufficiency of certain organs or tissues (diseases of the central nervous system, diabetes etc.). With the second type, the cells (often genetically modified) of a foreign organism are encapsulated in semipermeable membranes.In this way they are protected against antibodies and immunocompetent cells.The secreted molecules are however able to diffuse through the membrane. This new type of treatment was developed in order to prevent the rejection of animal cells.The implant can be removed from the organism at any time. Extracorporeal perfusion The plasma or blood of a patient is perfused through an organ of another species or through a bioartificial organ that contains live animal cells enclosed in a permeable capsule.These two procedures are carried out in order to bridge, for a limited period, a sometimes reversible organ failure or the waiting time before a planned allogeneic transplantation. The term xenotransplantation covers neither those products of animal origin that contain only molecules (e.g.porcine insulin) nor tissue grafts consisting of inactivated cells (e.g.porcine heart valves). C. Identification of the risks Every transplantation, whether allogeneic or xenogeneic, exposes the recipient to an immunological risk and to the risk of infection.In order to prevent rejection due to an immune reaction, a high level of immunosuppression is induced by means of drugs, which however greatly impairs the defence against infection.This effect is a significant cause of the morbidity following allogeneic transplantations and the mortality after the organ xenograft. 
One option for xenotransplantation is to take primates as donors.Such a concordant transplantation is however restricted to baboons, as anthropoids display a highly developed socialisation, are difficult to breed and are also threatened with extinction.Because of the close genetic relationship there is, however, a considerable risk of infection (HIV and the Ebola virus have been passed from apes to humans).Furthermore, it is not possible to breed baboons that are entirely free from germs.For these reasons a discordant animal species has been chosen as organ donor, namely the genetically modified (transgenic) pig, to which one or more human genes have been transferred in order to prevent hyperacute rejection.The porcine species was chosen because the risk of infection is less than with the use of apes, because it is possi-ble to breed pathogen-free animals and also because the organs of the pig are about the same size as those of humans.On the other hand, up until now the pathogens specific for the pig have caused disease in humans only in exceptional cases.Endogenous porcine retroviruses, which as has been shown can be transferred to human cells and which exhibit a high rate of mutation and recombination, give greater cause for concern.Although the studies that have been carried out up until now have not revealed any evidence of pathogenicity of these retroviruses, it cannot be excluded that this could appear a considerable time after the infection.Detection tests by means of which an infection with certain viruses of this type can be proven have recently become available.However, many of these viruses can still not be identified and therefore constitute a risk of disease that has to be taken seriously, especially in patients whose immune system is suppressed following a xenotransplantation.The possibility that such a patient could infect those around him cannot be discounted and is in fact horrifying to some people. The danger of physiological and biochemical intolerance is also a factor that has to be considered with organ xenotransplantation.Even though one day it may be possible to achieve a lasting tolerance of xenotransplants in humans, it is not certain that the physiological and biochemical processes of the animal organ are sufficiently compatible with that of the recipient to guarantee its optimal functioning in the long term. D. Possible advantages of xenografts There are some indisputable advantages of xenotransplantation that have to be mentioned: -the large number of available organs; -shortening of the waiting times; -the possibility of planning operations in advance; -the possibility of testing the transplants more thoroughly before the operation; -reduction of the risk of transfer of human pathogens; -the decreased danger of illicit trade in human organs. III. 
Legal regulations, guidelines and recommendations As in most European countries, the USA and Canada, also in Switzerland many reports have been published with recommendations or guidelines on xenotransplantation.Until relevant laws have been passed there are at present only provisional regulations in force.All the experts stress the international dimension of the problem and recommend that the necessary measures be harmonised.For example, steps must be taken to ensure that neighbouring countries do not introduce contradictory laws regarding licensing procedures or epidemiological monitoring.On 8.10.99,following the corresponding decision of the Upper House of the Swiss Parliament, the Lower House approved a modificaton to the Federal Resolution on the control of blood, blood products and transplants.The new Article 18a allows -with the corresponding approval of theresponsible Federal authority -the implantation of transplants of animal origin in humans (see Attachment I). IV. Medical-ethical principles in the clinical trials stage The clinical trial is an essential step in the development of xenotransplantation.Only by means of clinical trials is it possible to define the risks and to develop strategies to prevent them.However, strict medical-ethical principles have to be observed. Essential criteria concerning humans as recipients -Respect for the personality of the individual in accordance with the guidelines of the SAMS (Attachment I/1.4).-Observance of all appropriate measures for minimising the risk of infection: use of organs, tissues and cells that are free from known pathogens; preoperative and postoperative tests; short-, medium-and long-term checks. -control of the rejection reaction. -Assurance of a lasting physiological and functional compatibility of the xenotransplant and its survival must at least promise a lasting improvement of the patient's quality of life. In the case of cell transplants that are encapsulated in a membrane, the problem of the rejection reaction still does not seem to be permanently controllable, and there is probably a risk, although slight, of infection.With bioartificial organs, with the extracorporeal perfusion of organs and with cells contained in a membrane capsule, the limited life of the transplant is acceptable, as the procedure can be repeated several times. Essential criteria concerning the animal as donor -The rules of good practice for the reproduction and breeding of animals that are free from known germs.-The well-being of the animals must be assured and they must not be exposed to any unnecessary suffering (see Attachment I, 1.4).-The sequential removal of organs from the same animal is not allowed.-Primates may not be used as potential organ donors for humans in view of the increased risk of infection and the difficulties with the breeding of these species.Depending on the advances in scientific knowledge, suitably justified exceptions may be allowed.-These rules are based on the assumption that the use of animals as donors for humans is accepted in principle. 
Essential criteria in regard to society -The use of genetically modified or cloned animals as donors of organs, tissues and cells to the advantage of humans must be justified by a genuine therapeutic benefit for the recipient.-In order to prevent the appearance of diseases, strict regulations regarding biological safety are laid down.-The economic aspects are taken into account from the outset and attention is paid to ensuring that in the development of xenotransplantation the interests of society at large are not prejudiced by the financial interests of industry. Criteria for the selection of patients In the selection of patients for xenotransplantation, during the experimental stage all the following preconditions must be met: -The patient is suffering from an incurable disease and xenotransplantation is the only therapeutic possibility or there is no suitable human donor organ available.-The aim of the xenotransplantation of organs, tissues or cells must be to improve the quality of life or the life expectancy of the patient better than any other known treatment.-Children may not be considered as recipients, except within the framework of a trial in connection with a children's disease. Xenotransplantation must however continue to be considered as the "ultima ratio". Declaration of informed consent In the experimental stage of xenotransplantation, and all the more if this treatment is one day introduced as normal procedure in clinical practice, it is no longer the recipient alone who is involved; the recipient must declare his consent that the persons close to him (spouse, partner, children etc.) be fully informed, in particular concerning the demands and the risks associated with xenotransplantation.The recipient must be convinced of his moral obligation, after the transplantation has been carried out, to adhere to the instructions contained in the protocol, in his own interest and in that of those around him. Data register With the start of the clinical trials a national data base must be created for the data that are obtained, in collaboration with an international data base and in accordance with the law on data protection. The setting up of a data base is today very important, as the multicentre studies that are essential for estimating the risks require standardised procedures, especially for the monitoring for infectious diseases, which can appear a very long time after the contagion (endogenous retroviruses, prions etc.). All the data must be available to all the participating countries, in order to be able to very rapidly detect any problem that may arise. The ongoing studies must also meet these requirements. 
Alternatives

As long as the results from the experimental study phase are lacking, all opportunities to promote new solutions to overcome the shortage of allogeneic donor organs must be considered. It would be appropriate to first take the following steps:
- To intensify the prevention of diseases for which a transplantation is the only lasting form of treatment.
- To request the specialists involved to come to an understanding in order to reach a broad consensus concerning the indications for a transplantation.
- To encourage organ donation in the case of death, in particular through better information of the general public and by setting up offices for coordinators in all hospitals, whose task is to look out for potential organ donors and who are trained in talking to the relatives.
- To explain to the general public the possibility of removing an organ or parts of an organ, and tissues or cells (kidney, liver, bone marrow etc.) from a live donor, giving exact details of the conditions and risks of such a donation.
- To call on public and private institutions to support basic research and applied research in all fields associated with allogeneic and xenogeneic transplantations (omnipotent stem cells, bioartificial and artificial organs etc.).

It is the task of the Swiss Academy of Medical Sciences to follow up this new technology over the short, medium and long term, since it constitutes an ethical challenge for medicine and science. The SAMS therefore makes the following proposals:

1. All those involved in the public health sector, the general public and the authorities should be informed clearly and comprehensibly. This information must be provided continuously and must always take account of the developments in the field of research. The dissemination of the information is primarily the responsibility of the scientists participating in this research, the transplantation team and the treating physicians. The media play a very important role in this process. The World Health Organisation has decided to set up a "Xenotransplantation page" on the Internet. The responsibility for planning the dissemination of the information could be transferred to a national Expert Committee (see 5.3).

A broad-based public discussion should be initiated on the aims of allogeneic and xenogeneic transplantations and on other potential solutions to the problem of the shortage of donor organs. The organisation of a "public forum" could be useful in order to open the debate. As already emphasised, a clear line must be drawn between the xenotransplantation of organs and the transplantation of tissues and cells, which raise different problems. It is important that the costs and the benefit for the patient and for society are discussed, as well as the ideological differences existing within the general population.

An Expert Committee for Xenotransplantations should be appointed at the national level.
The task of this committee would be to monitor the development of the research in the field of transplantations, to create a national data base, and to make contact with similar organisations abroad and in Switzerland, for example with the SCBS (the interdisciplinary Swiss Committee for Biological Safety), in order to achieve a harmonisation of multicentre research projects at the international level. The members of this committee must include scientists (physicians, veterinarians and biologists), lawyers, ethicists (philosophers and theologians), nursing staff, politicians and a representative of the Swisstransplant Foundation (who is not a member of a transplantation team). The committee should be authorised to make expert appraisals of all clinical research projects in the field of xenotransplantation before they are submitted to the responsible Federal authority for approval.

Economic aspects

The whole field of transplantation medicine should also be investigated from the economic viewpoint. This will make it possible to weigh the advantages for the patients against the costs to society. For allogeneic transplantations, all-in tariffs have been established for the costs of inclusion on the waiting list, the removal and allocation of the organs, the surgical operations and the postoperative treatment. These tariffs probably do not include the true total costs of allogeneic transplantations. It is at present not possible to assess the costs for xenotransplantations, as there are too many unknown factors. But without doubt we must be prepared for a considerable increase in the demand; the costs for the animal organs will also probably be higher. The life-long immunosuppressant therapy and life-long surveillance of the recipient and his direct surroundings will constitute a major financial burden. All these factors require very close consideration.

Approved by the Senate of the SAMS on 18th May 2000. For the Xenotransplantation Sub-Committee: Prof. Noël Genton, Lausanne, President; Prof. Bernard

Attachment: Legal regulations, guidelines, Swiss and international recommendations

Resolution on the control of blood, blood products and transplants of 8.10.99
1 For the implantation of transplants of animal origin in humans, the approval of the responsible Federal Office is required.
2 Transplants of animal origin can be implanted in humans within the framework of a clinical trial, if the risk of infection for the general population can be excluded with a large degree of probability and if a therapeutic benefit can be expected from the transplantation.
3 Transplants of animal origin can be implanted in humans within the framework of a standard treatment, if according to the present state of scientific and technical knowledge a risk of infection for the general population can be excluded and if the therapeutic benefit of the transplantation has been proved in clinical trials.

Regulations in Europe and the USA

2.1 United Kingdom
At the suggestion of the Nuffield Council on Bioethics, an interim committee for the regulation of xenotransplantation was appointed in 1997 (UK Xenotransplantation Interim Regulatory Authority, UKXIRA). This committee is thus charged with drawing up guidelines for biological safety. It is empowered to decide on the approval of clinical studies.

France
Law No. 98-535 of 1.7.98, the implementing statutes of which are in preparation. Xenotransplantations belong to the field of clinical studies and are subject to approval by the Ministry of Health.
Council of Europe
In Recommendation No. 1399 of the Parliamentary Assembly of the Council of Europe of 29.1.99, a moratorium on all clinical xenotransplantation trials was suggested to the Council of Ministers. The Council of Ministers shared the concern of the Parliamentary Assembly but did not take up this suggestion for a moratorium, and took the decision "to create a working group with the task of preparing a project for guidelines in the field of xenotransplantation, which takes into account other works already carried out by other, mainly international authorities, as well as the necessary collaboration at the global level".

United States
Xenotransplantations are subject to control by the FDA and the CDC. The work is coordinated between these two authorities. Guidelines concerning risks of infection and their prevention are in preparation. All clinical studies require the approval of the FDA.

Various publications and analyses
3.1 International Ethical Guidelines for Biomedical Research involving Human Subjects, Geneva, 1993.
3.2 In 1998 the World Health Organisation (WHO) published two documents entitled "Xenotransplantations: Guidelines concerning the prevention of infectious diseases and their management" and "The ethical aspects of xenotransplantations".
3.3 The Organisation for Economic Cooperation and Development (OECD) has organised several conferences with the aim of establishing a common attitude in regard to transplantations, and pressed for international cooperation in the field of biological safety in connection with xenotransplantations. A concluding report was published in October 1999.

In this debate the xenotransplantation of tissues and cells, which in Switzerland and certain other countries is already in the phase of promising clinical trials, must be completely separated from the xenotransplantation of organs, a field where the research is still in the preclinical phase and the future of which is still uncertain.
2017-10-17T10:39:17.366Z
2001-06-30T00:00:00.000
{ "year": 2001, "sha1": "3e06eac498dd320fae7d96afda8720f45be19226", "oa_license": "CCBY", "oa_url": "https://doi.org/10.4414/smw.2001.09735", "oa_status": "GOLD", "pdf_src": "ScienceParseMerged", "pdf_hash": "3e06eac498dd320fae7d96afda8720f45be19226", "s2fieldsofstudy": [ "Medicine" ], "extfieldsofstudy": [ "Medicine" ] }
5026307
pes2o/s2orc
v3-fos-license
The relationship between concussion and alcohol consumption among university athletes

Introduction: This study investigated concussion as a potential risk factor for increased alcohol consumption in university athletes. Methods: Using a cross-sectional design, 41 university students (37% with a history of concussion) completed self-report measures, while electrodermal activation (EDA) was recorded for each participant to capture baseline physiological arousal. Results: As expected, concussion status significantly predicted alcohol consumption over and above athletic status, b = 0.34, p = 0.034, 95% CI [0.195, 4.832], such that those with a prior concussion history engaged in greater alcohol consumption. Importantly, concussion status also significantly predicted baseline physiological arousal, b = −0.39, p = 0.014, 95% CI [−0.979, −0.120], such that those with a history of concussion exhibited lower EDA. Conclusions: Elevated alcohol consumption among athletes is a pronounced associate of concussion in sports and may be a behavioral reflection of disruption to the orbitofrontal cortex, an area implicated in inhibition.

Introduction

According to Iverson and Lange (2009), head injury severity can be classified along a continuum spanning from mild to catastrophic. Injuries on the mild end of this spectrum, such as concussions, account for the majority of all reported injuries, reflecting approximately 70-90% (Cassidy et al., 2004). Concussions, impacts to the head or torso that generate acceleration/deceleration forces sufficient to alter one's state of consciousness (e.g., feeling confused or dazed; Kay et al., 1993), are commonly sustained in high-risk sports (Gessel, Fields, Collins, Dick, & Comstock, 2007; Noble & Hesdorffer, 2013; Zuckerman et al., 2015). Indeed, over two academic years and 25 collegiate sports, Zuckerman et al. (2015) recorded 1670 sports-related concussions, the majority of which occurred during high-risk sports competitions, such as football (36.1% of reported concussions), ice hockey (13.4%) and women's soccer (8.1%). Similar rates have been recorded in high school and university student populations (Baker & Good, 2014; Halstead & Walter, 2010), implying that the nature of play demanded by these sports may place athletes at a greater risk of sustaining a head injury (McAllister et al., 2012). For example, in a high-risk sport like competitive cheerleading, concussions account for over 30% of all injuries sustained (Currie, Fields, Patterson, & Comstock, 2015) despite the noncontact nature of competition. Given that the ventromedial prefrontal cortex (vmPFC) is highly susceptible to disturbance in any closed-head injury (Morales, Diaz-Daza, Hlatky, & Hayman, 2007), elevated levels of impulsivity and aggression are commonly reported post-concussion (Goswami et al., 2016), and concussion severity is negatively associated with baseline levels of physiological arousal (i.e., electrodermal activation [EDA] levels; Baker & Good, 2014; van Noordt & Good, 2011). Disruption to the vmPFC and subsequent post-injury underarousal limit one's ability to anticipate negative outcomes in unpredictable/risky situations (Damasio, Tranel, & Damasio, 1990), increasing the probability that an individual will engage in impulsive, aggressive, or risk-taking behavior. In particular, according to the Somatic Marker Hypothesis (SMH; Damasio, 1994), damage to the vmPFC compromises the regulation of physiological arousal cues, resulting in dampened autonomic functioning.
In turn, this puts individuals in a physiologically unprepared and uninformed state, leading to greater addiction, risk-taking and impulsive behaviors without sufficient somatic signals to inform cognition and guide behavior (Verdejo-Garcia, Pérez-García, & Bechara, 2006). Indeed, patients with severe damage to the vmPFC have been reliably found to exhibit significant difficulties in decision-making under situations of uncertainty and greater risk-taking (Morales et al., 2005). One risk-taking behavior that has been investigated post-injury is the increased engagement in substance use (e.g., binge drinking, cigarette smoking, etc.; McKinlay, Corrigan, Horwood, & Fergusson, 2014;Bjork & Grant, 2009). Although alcohol use tends to decrease during the first year after injury (Ponsford, Whelan-Goodinson, & Bahar-Fuchs, 2007), many studies show an increased likelihood of heavy drinking in traumatic brain injury (TBI) patients (Koponen et al., 2002) and a strong positive association between time since injury and alcohol consumption (Kreutzer, Witol, & Marwitz, 1996). Moreover, self-reported TBI has been linked to increased substance use during adolescence (Ilie et al., 2015) and, further, concussions sustained during childhood are predictive of problematic alcohol use in adolescence and early adulthood (Kennedy, Cohen, & Munafó, 2017). Given that dysregulated physiological arousal is common after concussion (Baker & Good, 2014;van Noordt & Good, 2011), it is hypothesized that attenuated arousal may play a key role in post-injury alcohol consumption. Specifically, since those with a history of concussion exhibit dampened somatic activation during the anticipatory stages of decisionmaking (van Noordt & Good, 2011), they may be unable to anticipate the negative consequences associated with drinking behavior and impulsively consume alcohol in excess. Alternatively, alcohol consumption may serve as a solution to chronic underarousal post-concussion and individuals may learn to consume alcohol as a means of boosting autonomic activity. In both cases, it is proposed that physiological underarousal may serve as a mechanism of increased drinking behavior after injury. At present, the proposed relationship between concussion and alcohol consumption through physiological underarousal remains theoretical and requires further investigation. Research shows that elevated alcohol consumption increases one's risk of sustaining a brain injury (Silver, Kramer, Greenwald, & Weissman, 2001), making it difficult to determine the extent to which premorbid risk-taking and alcohol use precipitate head injury. Indeed, athletes exhibit higher levels of sensation seeking (Hartman & Rawson, 1992;Mastroleo, Scaglione, Mallett, & Turrisi, 2013;Schroth, 1995), more frequent alcohol consumption, higher rates of heavy episodic drinking, a greater number of sexual partners, and a greater engagement in unsafe sex compared to non-athletes (Wetherill & Fromme, 2007). Thus, those with riskier personalities may be more likely to seek out risky activities, such as high-risk sports and binge drinking. For instance, athletic status is associated with a greater number of problem behaviors while under the influence, such as getting in trouble with the police (Nelson & Wechsler, 2001) and sustaining an injury (Leichliter, Meilman, Presley, & Cashin, 1998). Thus, athletes who engage in behavior that is risky enough to result in a concussive injury might also be more prone to engage in other risky behaviors such as excessive substance use. 
Alternatively, others have proposed that alcohol may serve as a means of coping with sports-related stressors, reinforcing athletic performance, or fostering belongingness and involvement in sports culture (see Martens & Martin, 2010). For instance, athletes who have greater involvement in their teams (i.e., team captains) drink more per week and engage in more episodic drinking than those who are less involved (i.e., second-string players; Leichliter et al., 1998); thus, it may be that the added time commitment of athletics causes greater stress and maladaptive forms of coping (Damm & Murray, 1996;Marcello, Danish, & Stolberg, 1989). Few studies, however, have found support for this idea, as athletes do not report coping as a motivation for alcohol use (Green, Uryasz, Petr, & Bray, 2001;Herring et al., 2016). Similarly, the association between alcohol consumption and social and cultural motives remains unclear since many factors influence this relationship (Martens, Dams-O'Connor, & Beck, 2006). Taken together, the above findings highlight the need to elucidate whether elevated alcohol consumption in athletics is reflective of pre-injury personality characteristics exclusively or whether the increased prevalence of concussion in sports, and subsequent dampened physiological feedback, may contribute. The aim of this study, therefore, was to investigate the potential role of concussion as a risk factor for increased alcohol consumption in university athletes. In particular, given the established relationship among head injury, decision-making processes, and physiological arousal, the current study sought to examine EDA as a potential mechanism of the association between concussion and alcohol use. First, it was predicted that both those with a history of concussion, and those classified as athletes, would engage in greater alcohol consumption than their non-concussed and non-athlete peers. Second, based on previous findings of physiological underarousal in concussed individuals (Baker & Good, 2014), we hypothesized that those with a history of concussion would exhibit lower baseline EDA compared to those with no concussion history. Lastly, we predicted that concussion history would be associated with increased alcohol use, over and above the effects of athletic status. Participants Forty-one Brock University students (M age = 20.71, SD = 3.95; 19.5% male) attended a laboratory session in the Jack and Nora Walker Lifespan Development Centre testing facilities on campus in St. Catharines, Ontario, Canada. Poster advertisements, standardized recruitment PowerPoint slides (displayed in Psychology courses offering course credit for research participation), and the Brock University Psychology Department Research Pool (SONA) were used to recruit participants. Importantly, to eliminate the potential confound of diagnosis threat (Suhr & Gunstad, 2002;Suhr & Gunstad, 2005), participants were not recruited on the basis of head injury status and were only informed of the authors' added interests in concussion during the post-study debriefing. In line with previous investigations (Gallant, Barry, & Good, in press), the primary sport listed for current participation in university athletics was used as a means of classifying athletic status, such that 26 individuals self-identified as non-athletes (63%), 7 as low-risk athletes (17%), and 8 as high-risk athletes (20%). 
Of the 15 self-reported athletes, 10 (67%) currently participated in a recreational sports league and 5 (33%) participated in a competitive sports league. Table 1 contains a list of all reported sports affiliations and their associated demographic frequencies. Fifteen participants (37%) self-reported having sustained a previous concussion, while 26 had no such history (63%). Of those who endorsed a concussion history, 8 (53%) were non-athletes, while 7 (47%) were athletes (2 low-risk and 5 high-risk). Given the low number of low-risk athletes with prior concussions, athletic status was collapsed to form two athlete categories (i.e., athlete, non-athlete). The average time since injury was 91.27 months (7.61 years) and ranged from 6 to 288 months. For more details regarding the severity of concussive injuries and the associated demographic

Table 1. Self-reported sport-related activities currently played in University (n = 15).

Technologies, 2008) on a 16-inch Acer laptop computer, EDA was recorded as an index of physiological arousal via the Datapac USB 16-bit Data Acquisition Instrument and two silver-silver chloride pads. In particular, the silver-silver chloride pads were placed on the index and fourth fingers of participants' nondominant hands. Electrodermal activation (EDA) amplitude was used as an index of physiological arousal in the current study, as it has been shown to be a reliable index of autonomic nervous system (ANS) functioning (Fowles & Schneider, 1974).

Self-report questionnaires

The Everyday Living Questionnaire (ELQ; Good, 2008) is a demographic questionnaire that collects information on participants' age, sex, education, medical history, leisure activities, and recreational and athletic involvements. In particular, the ELQ was used to determine head injury status and to assess alcohol use. Based on the ACRM criteria for concussion (Kay et al., 1993), participants were asked, "Have you ever hit your head with a force sufficient to alter your state of consciousness?" Participants who endorsed this item were classified as having a concussion history. Alcohol use was measured by a number of self-report questions embedded in the ELQ (e.g., weekly alcohol use and average number of alcoholic drinks per outing). The HEXACO Personality Inventory-Revised (HEXACO-PI-R; Lee & Ashton, 2016) is a measure of the six major dimensions of personality: Honesty-Humility, Emotionality, Extraversion, Agreeableness, Conscientiousness, and Openness to Experience. For the current study, the 100-item version of the HEXACO was used to assess each of the six domain-level scales. Since pre-injury personality characteristics related to alcohol consumption are most relevant to the current research questions, the Extraversion, Agreeableness, Conscientiousness, and Openness to Experience domains were of particular interest, given their previously demonstrated associations with alcohol consumption (e.g., Hakulinen et al., 2015); the Prudence subscale from the Conscientiousness domain was also examined since it captures the tendency for risk-taking behaviour and impulsivity (i.e., high scores on this scale reflect an ability to inhibit impulses and consider options carefully prior to decision-making).

Procedure

Upon approval from the Brock University Research Ethics Board (#16-047), participants were tested individually for approximately 90 min.
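The EDA peak amplitude index described above is reported only descriptively in the paper. Purely as an illustration, and not as the study's actual processing pipeline, the following sketch shows one way a single baseline amplitude value per participant could be derived from a sampled skin-conductance signal; the sampling rate, the simulated signal, and the choice of the median as the tonic level are all assumptions.

```python
# Illustrative sketch only: not the study's actual EDA pipeline.
# Sampling rate, signal, and the "tonic level = median" choice are assumptions.
import numpy as np

fs = 100                                   # assumed sampling rate (Hz)
duration_s = 180                           # 3-minute baseline recording
t = np.arange(0, duration_s, 1 / fs)

# Simulated skin-conductance signal (microsiemens): slow drift plus noise.
rng = np.random.default_rng(seed=0)
eda = 2.0 + 0.3 * np.sin(2 * np.pi * 0.05 * t) + 0.05 * rng.standard_normal(t.size)

# One possible amplitude index: the largest excursion above an estimated tonic level.
tonic_level = np.median(eda)
peak_amplitude = float(eda.max() - tonic_level)

print(f"Baseline EDA peak amplitude (simulated): {peak_amplitude:.3f} uS")
```

Whatever the exact routine used, the relevant point for the analyses that follow is that the index reduces each participant's baseline recording to a single scalar, which is what enters the regressions reported below.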
After providing informed consent, a 3-min recording of baseline EDA was obtained, following which participants completed the self-report questionnaires and were debriefed.

Data analyses

The Statistical Package for the Social Sciences (SPSS) was used to compute all analyses and examine all assumptions. Unless stated otherwise, all statistical assumptions were met. For all analyses, a statistical significance level of p < 0.05 was used; results approaching statistical significance, however, are also discussed. To examine categorical group differences, Chi-square tests of independence and independent samples t-tests were conducted. To predict alcohol consumption from concussion history, athletic status, and arousal, multiple regressions were conducted; all analyses were conducted with and without sex as a covariate and the results did not differ.

Demographic information

Given that previous research has demonstrated a relationship between drinking behavior and sex, age, and mental health status (Horner et al., 2005), Chi-square tests of independence were conducted to examine sex, age and mental health differences by concussion history and athletic status. Further Chi-square tests of independence were used to observe any differences in rates of concussion as a function of athletic status. No associations were found for any of the variables examined, nor for any of the health-related variables (i.e., hospitalizations, diagnosed psychiatric condition, medication use, diagnosed learning disorder), use of services (i.e., physiotherapy, occupational therapy, learning resource teacher, or educational assistance), self-reported alertness, enjoyment of life, and life stressors (p's > 0.05). Further group-level differences between the concussion and no-concussion groups were examined via t-test statistics for each of the HEXACO personality dimensions. In particular, no differences were found between the concussion and no-concussion groups for the Extraversion, Agreeableness, Conscientiousness, and Openness to Experience domains, or for the Prudence subscale (p's > 0.2).

Alcohol consumption by head injury and athletic status

A linear regression was conducted, such that alcohol consumption (drinks per week) was regressed on athletic status (athlete, non-athlete) and head injury status. Neither athletic status, b = 0.04, p = 0.829, 95% CI [−2.459, 3.051], nor concussion history, b = 0.12, p = 0.486, 95% CI [−1.845, 3.807], significantly predicted weekly alcohol consumption. A separate linear regression was conducted, such that number of drinks per outing was regressed on athletic status and head injury status. Interestingly, and as expected, concussion was found to be a significant predictor of alcohol consumption per outing, b = 0.34, p = 0.034, 95% CI [0.195, 4.832], over and above the effects of athletic status (Table 3). In particular, concussion was associated with a greater number of alcoholic drinks per outing. Fig. 1 depicts this finding, showing the number of drinks per outing as a function of concussion status.

Physiological arousal

A linear regression was conducted, such that baseline physiological arousal (i.e., EDA peak amplitude) was regressed on athletic status (athlete, non-athlete) and head injury status. Concussion was found to be a significant predictor of baseline physiological arousal, b = −0.39, p = 0.014, 95% CI [−0.979, −0.120]. Specifically, university students who self-reported a previous concussive injury had significantly lower levels of baseline arousal compared to those who did not endorse a history of concussion. Fig. 2 depicts this finding, showing baseline physiological arousal as a function of concussion status.
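To make the shape of these models concrete, the following is a minimal sketch of how the regressions reported in this and the next section could be specified. The study itself used SPSS; the data frame, its column names, and the values below are invented for the sketch, and its unstandardized output is not meant to reproduce the coefficients reported above.

```python
# Hypothetical illustration of the reported regression models (the study used SPSS).
# The data frame, column names, and values are invented for this sketch.
import pandas as pd
import statsmodels.formula.api as smf

df = pd.DataFrame({
    "concussion":        [1, 0, 0, 1, 0, 1, 0, 0, 1, 0],  # 1 = prior concussion
    "athlete":           [1, 1, 0, 0, 0, 1, 0, 1, 1, 0],  # 1 = athlete
    "sex":               [0, 1, 0, 0, 1, 0, 1, 0, 1, 0],  # optional covariate
    "drinks_per_outing": [6, 3, 2, 5, 1, 7, 2, 4, 6, 2],
    "eda_amplitude":     [0.4, 1.1, 0.9, 0.5, 1.3, 0.3, 1.0, 0.8, 0.4, 1.2],
})

# Drinks per outing regressed on concussion history and athletic status.
m1 = smf.ols("drinks_per_outing ~ concussion + athlete", data=df).fit()
print(m1.summary())          # coefficients, p-values, 95% confidence intervals

# Drinks per outing regressed on baseline EDA amplitude and athletic status.
m2 = smf.ols("drinks_per_outing ~ eda_amplitude + athlete", data=df).fit()

# Repeating the analyses with sex as a covariate amounts to adding "+ sex".
m1_sex = smf.ols("drinks_per_outing ~ concussion + athlete + sex", data=df).fit()
```

The design choice illustrated here is simply that concussion history (or EDA amplitude) and athletic status enter the same model, so any effect of concussion is estimated over and above athletic status.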
Further linear regressions were conducted, such that alcohol consumption was regressed on EDA peak amplitude and athletic status. Neither EDA amplitude, b = −0.27, p = 0.100, 95% CI [−3.537, 0.322], nor athletic status, b = 0.02, p = 0.895, 95% CI [−2.519, 2.871], was found to be a significant predictor of weekly alcohol consumption. However, subsequent analyses revealed that EDA amplitude was a significant predictor of the average number of alcoholic drinks consumed per outing, b = −0.34, p = 0.034, 95% CI [−3.475, −0.144], over and above the effects of athletic status, b = 0.02, p = 0.884, 95% CI [−2.209, 2.554], such that those who had lower baseline levels of physiological arousal consumed more per outing. See Fig. 3 for a depiction of this relationship in terms of drinks per outing. Lastly, those with and without a history of concussion were examined independently; in the no-concussion group, there was a trend for athletic status (athlete, non-athlete) to be related to alcohol consumption, b = 0.35, p = 0.096, 95% CI [−0.366, 4.179], such that athletes reported engaging in heavier alcohol consumption per outing than non-athletes. Moreover, EDA amplitude did not significantly predict drinking behavior in those without a history of concussion. In contrast, in the concussion group, EDA amplitude was found to approach significance when predicting alcohol consumption, b = −0.49, p = 0.056, 95% CI [−11.739, 0.177], such that lower EDA amplitude was associated with heavier alcohol consumption per outing. In addition, athletic status (athlete, non-athlete) did not significantly predict drinking behavior in participants with a history of concussion. Tables 4 and 5 contain further details regarding this relationship.

Discussion

The purpose of this research was to examine concussion as a possible risk factor for elevated alcohol consumption in university athletes, as well as to investigate physiological underarousal as a potential mechanism underlying this relationship. Although a number of studies have identified factors that influence heavy drinking in athletes, to the authors' knowledge, this is the first investigation to examine the effects of concussion on drinking behavior, over and above the effects of premorbid athletic status. Results indicated that individuals with a history of concussion report higher levels of alcohol consumption per outing than their non-injured cohort, over and above the effects of athletic status. Further, those with a history of concussion exhibited significantly lower levels of arousal than those without a prior concussion. Contrary to our first hypothesis, it was demonstrated that athletes and non-athletes did not differ in terms of drinking behaviors, despite previous research indicating that athletes engage in greater substance use (Damm & Murray, 1996; Wetherill & Fromme, 2007). This nonsignificant finding may be explained in part by the similar rates of concussion observed across non-athletes and athletes. If head injury is a primary factor in drinking behavior, then differences in alcohol consumption would be expected when rates of concussion vary according to athletic status, which was not found to be the case. Nonetheless, it is also possible that, due to the small sample size of the current study, there may have been insufficient power to detect differences in drinking behaviour as a function of athletic status.
As expected, however, it was found that participants with a history of concussion reported significantly greater alcohol consumption on a per-outing basis compared to those with no such history, but this was not found for frequency of alcohol use. This is consistent with previous research showing that individuals with TBI are unlikely to be classified as light or social drinkers (Kolakowsky-Hayner et al., 2002). Indeed, Kolakowsky-Hayner et al. (2002) found that while approximately 50% of individuals with brain injury refrained from using alcohol post-injury, 43% were classified as moderate or heavy drinkers. In another study (Corrigan, Rust, & Lamb-Hart, 1995), it was found that while individuals with a brain injury engage in significantly greater pre-injury alcohol use when compared to the normative population, approximately 20% of those who were initially classified as 'light' drinkers or who abstained from alcohol use before injury engaged in heavy drinking behavior after injury. Further, concussion status predicted drinks per outing over and above athletic status. This finding is consistent with the idea that risk-taking behavior (including elevated alcohol consumption) observed following concussion may be a function of injury and physiological underarousal, rather than exclusively pre-injury characteristics (e.g., riskier personalities such as sensation seeking or impulsivity). Indeed, no significant differences in personality factors (including those assessing impulsive tendencies) were observed across the concussion and no-concussion groups in the current study. Among the somatic symptoms observed following concussion, dysregulated physiological arousal levels are common (e.g., Baker & Good, 2014; van Noordt & Good, 2011); the current study replicated these findings, showing that those with a history of concussion exhibit significantly lower baseline EDA than their non-injured cohort. Moreover, our results provide preliminary evidence of a link between physiological arousal and alcohol consumption. For those with a history of concussion, binge drinking may reflect an attempt to increase arousal levels, as opposed to representing thrill or excitement seeking. Those with a history of concussion have been found to exhibit dampened arousal in anticipation of decision-making, leading to disadvantageous and risky decisions without the somatic "markers" to bias their behaviour in a socially acceptable manner (e.g., Damasio, 1994; van Noordt & Good, 2011). Therefore, the increased alcohol consumption after injury may be reflective of a reduced capacity to anticipate outcomes. After injury, individuals may be insufficiently alerted to the potential dangers of a given situation (e.g., the negative consequences of excessive alcohol consumption) and unable to implicitly monitor their environment via physiological cues. Indeed, previous investigations have shown that concussion is associated with increased risk-taking (Buckley & Chapman, 2016) and substance use (O'Jile, Ryan, Parks-Levy, Betz, & Gouvier, 2004), despite the fact that these individuals are capable of accurately assessing the risk level of a situation (O'Jile et al., 2004).

Limitations

It is important to acknowledge several limitations of the current study. The correlational nature of this research restricts our ability to make causal inferences regarding the relationship between alcohol consumption and concussion.
No reports of pre-injury alcohol use were collected, and group differences (such as greater alcohol consumption and lower physiological arousal) may have existed prior to the concussion incidents. Longitudinal research designs, establishing pre-injury behaviors prior to assessment, and more thorough access to historical behavioral profiles could assist future investigations. Another limitation is the use of retrospective self-reports of head injury and alcohol use; self-report is more reliable when supported by corroborating information (Paulhus & Vazire, 2007). Nonetheless, since many head injuries go undocumented (McCrea, 2008; McCrea, Hammeke, Olsen, Leo, & Guskiewicz, 2004), supplementary information is often unavailable. Further, when asked to estimate alcohol consumption, individuals tend to underestimate usage (Stockwell et al., 2004); thus, a better understanding of alcohol consumption in individuals with a head injury could include more precise monitoring of alcohol intake (e.g., proactive reporting, observation, record keeping). Another consideration would be a larger sample size, which would provide greater power for statistical analyses and an opportunity to examine possible mediation effects associated with physiological arousal. It would be interesting to examine whether physiological arousal mediates the relationship between concussion and increased alcohol consumption and to investigate potential moderating effects such as drinking motives. Furthermore, the current study consisted of university students and did not include individuals with a prior concussion who did not pursue post-secondary education. This sample may not reflect the general population in that they may have different coping or adjustment skills that have permitted them to continue to university. Moreover, although university students have been found to engage in greater alcohol use compared to the general population (O'Malley & Johnston, 2002), the current findings of greater consumption in those with a history of concussion compared to their non-injured peers are consistent with non-student populations (Corrigan et al., 1995).

Conclusions

This study demonstrated that university students with a history of concussion consume more alcohol per outing than their non-injured cohort and that concussion status is predictive of alcohol use, over and above the effects of athletic status. Importantly, it was also found that those with a history of concussion display significantly lower physiological arousal than their non-injured peers. Moreover, despite previous literature indicating that athletes engage in greater alcohol consumption (Wetherill & Fromme, 2007), athletes and non-athletes did not differ in terms of drinking. Further, physiological underarousal was associated with greater alcohol consumption. Taken together, these findings imply that drinking behaviors are associated with concussion status (i.e., physiological underarousal), over and above athletic status; individuals may consume more alcohol post-injury due to physiological unpreparedness.

Role of funding sources

Bradey Alcock and Caitlyn Gallant held funding from the Canadian Institutes of Health Research (CIHR) Frederick Banting and Charles Best Canada Graduate Scholarship during the preparation of this research, and Dr. Dawn Good is affiliated with the Ontario Brain Injury Association (OBIA) and Ontario Neurotrauma Foundation (ONF).
The CIHR, OBIA, and ONF had no role in the study design, collection, analysis, or interpretation of the data, writing of the manuscript or the decision to submit the paper for publication.
2018-05-03T00:19:16.288Z
2018-02-06T00:00:00.000
{ "year": 2018, "sha1": "bdec4121981b1e05b8e976e7252d95b3a81f0c23", "oa_license": "CCBYNCND", "oa_url": "https://doi.org/10.1016/j.abrep.2018.02.001", "oa_status": "GOLD", "pdf_src": "PubMedCentral", "pdf_hash": "bdec4121981b1e05b8e976e7252d95b3a81f0c23", "s2fieldsofstudy": [ "Medicine" ], "extfieldsofstudy": [ "Medicine" ] }